<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
	<channel>
		<title>Philosophical Multicore</title>
		<description>Writings on philosophy, investing, and altruism from a person with some thoughts on philosophy, investing, and altruism</description>
		<link>http://mdickens.me</link>
		<atom:link href="http://mdickens.me/feed.xml" rel="self" type="application/rss+xml" />
        <pubDate>Tue, 04 Nov 2025 16:50:54 -0800</pubDate>
        <lastBuildDate>Tue, 04 Nov 2025 16:50:54 -0800</lastBuildDate>
        <generator>Jekyll v4.3.4</generator>
		
			<item>
				<title>Do Small Protests Work?</title>
				<pubDate>Tue, 04 Nov 2025 00:00:00 -0800</pubDate>
				<link>http://mdickens.me/2025/11/04/do_small_protests_work/</link>
				<guid isPermaLink="true">http://mdickens.me/2025/11/04/do_small_protests_work/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;&lt;strong&gt;TLDR:&lt;/strong&gt; The available evidence is weak. It looks like small protests may be effective at garnering support among the general public. Policy-makers appear to be more sensitive to protest size, and it’s not clear whether small protests have a positive or negative effect on their perception.&lt;/p&gt;

&lt;p&gt;Previously, I &lt;a href=&quot;https://mdickens.me/2025/04/18/protest_outcomes_critical_review/&quot;&gt;reviewed&lt;/a&gt; evidence from natural experiments and concluded that protests work (credence: 90%).&lt;/p&gt;

&lt;p&gt;My biggest outstanding concern is that all the protests I reviewed were nationwide, whereas the causes I care most about (AI safety, animal welfare) can only put together small protests. Based on the evidence, I’m pretty confident that large protests work. But what about small ones?&lt;/p&gt;

&lt;p&gt;I can see arguments in both directions.&lt;/p&gt;

&lt;p&gt;On the one hand, people are &lt;a href=&quot;https://en.wikipedia.org/wiki/Scope_neglect&quot;&gt;scope insensitive&lt;/a&gt;. I’m pretty sure that a 20,000-person protest is much less than twice as impactful as a 10,000-person protest. And this principle may extend down to protests that only include 10–20 people.&lt;/p&gt;

&lt;p&gt;On the other hand, a large protest and a small protest may send different messages. People might see a small protest and think, “Why aren’t there more people here? This cause must not be very important.” So even if large protests work, it’s conceivable that small protests could backfire.&lt;/p&gt;

&lt;p&gt;What does the scientific literature say about which of those ideas is correct?&lt;/p&gt;

&lt;!-- more --&gt;

&lt;h2 id=&quot;contents&quot;&gt;Contents&lt;/h2&gt;

&lt;ul id=&quot;markdown-toc&quot;&gt;
  &lt;li&gt;&lt;a href=&quot;#contents&quot; id=&quot;markdown-toc-contents&quot;&gt;Contents&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#evidence-from-nationwide-natural-experiments&quot; id=&quot;markdown-toc-evidence-from-nationwide-natural-experiments&quot;&gt;Evidence from nationwide natural experiments&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#direct-evidence-from-lab-experiments&quot; id=&quot;markdown-toc-direct-evidence-from-lab-experiments&quot;&gt;Direct evidence from lab experiments&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#indirect-evidence-from-lab-experiments&quot; id=&quot;markdown-toc-indirect-evidence-from-lab-experiments&quot;&gt;Indirect evidence from lab experiments&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#non-experimental-evidence&quot; id=&quot;markdown-toc-non-experimental-evidence&quot;&gt;Non-experimental evidence&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#conclusion&quot; id=&quot;markdown-toc-conclusion&quot;&gt;Conclusion&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#future-work&quot; id=&quot;markdown-toc-future-work&quot;&gt;Future work&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#appendix-table-of-papers-from-orazani-et-al-2021&quot; id=&quot;markdown-toc-appendix-table-of-papers-from-orazani-et-al-2021&quot;&gt;Appendix: Table of papers from Orazani et al. (2021)&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#notes&quot; id=&quot;markdown-toc-notes&quot;&gt;Notes&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;evidence-from-nationwide-natural-experiments&quot;&gt;Evidence from nationwide natural experiments&lt;/h2&gt;

&lt;p&gt;Among the studies in my prior &lt;a href=&quot;https://mdickens.me/2025/04/18/protest_outcomes_critical_review/&quot;&gt;lit review&lt;/a&gt;, two modeled how voter outcomes varied with the number of protesters in each county. Unfortunately, both used linear models, which don’t help us here. We want to know if there’s a &lt;em&gt;non&lt;/em&gt;-linearity near zero—something that looks like this:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/protest-nonlinear.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;A linear model can’t tell you what the shape of the curve looks like, or whether it dips into the negative for sufficiently small protests.&lt;/p&gt;

&lt;p&gt;(In theory, I could analyze the raw data myself, but that would be a lot of work.)&lt;/p&gt;

&lt;h2 id=&quot;direct-evidence-from-lab-experiments&quot;&gt;Direct evidence from lab experiments&lt;/h2&gt;

&lt;p&gt;To my knowledge, there are two experiments that directly tested whether the size of a protest affected people’s support for a cause.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;/materials/Demonstrating Power.pdf&quot;&gt;Wouters &amp;amp; Walgrave (2017)&lt;/a&gt;&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt; showed (fictitious) news articles to Belgian legislators. The news articles said either “There were about 500 participants which was much less than expected”, or “There were more than 5,000 participants which was more than expected.”&lt;sup id=&quot;fnref:6&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:6&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt; The authors also altered three other independent variables, which they called Worthiness, Unity, and Commitment. Then they asked participants questions to judge how much they agreed with protesters (“position”) and whether they intended to take any actions to support protesters (“action”). Below I present the resulting regression coefficients and p-values.&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt;position&lt;/th&gt;
      &lt;th&gt;p-val&lt;/th&gt;
      &lt;th&gt;action&lt;/th&gt;
      &lt;th&gt;p-val&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;numbers&lt;/td&gt;
      &lt;td&gt;0.282&lt;/td&gt;
      &lt;td&gt;0.008&lt;/td&gt;
      &lt;td&gt;0.439&lt;/td&gt;
      &lt;td&gt;0.000&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;worthiness&lt;/td&gt;
      &lt;td&gt;0.381&lt;/td&gt;
      &lt;td&gt;0.000&lt;/td&gt;
      &lt;td&gt;0.116&lt;/td&gt;
      &lt;td&gt;0.297&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;unity&lt;/td&gt;
      &lt;td&gt;0.353&lt;/td&gt;
      &lt;td&gt;0.001&lt;/td&gt;
      &lt;td&gt;0.350&lt;/td&gt;
      &lt;td&gt;0.002&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;commitment&lt;/td&gt;
      &lt;td&gt;0.190&lt;/td&gt;
      &lt;td&gt;0.300&lt;/td&gt;
      &lt;td&gt;0.156&lt;/td&gt;
      &lt;td&gt;0.161&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Numbers was a significant predictor of both outcomes (and it had the largest coefficient for “action”), which suggests that legislators care a lot about the size of a protest. However, this study did not include a control group, so we don’t know whether smaller protests had a positive effect, a negative effect, or no effect.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;/materials/Persuasive Power of Protest (Wouters 2019).pdf&quot;&gt;Wouters (2019)&lt;/a&gt;&lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt; conducted two similar studies, this time interviewing members of the general public rather than legislators. They described protest sizes the same way as Wouters &amp;amp; Walgrave (2017) (“500, less than expected” vs. “5,000, more than expected”), and again used four independent variables. Below are the regression coefficients and p-values from the two different studies, where the dependent variable was participants’ support for the cause.&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt;study 1&lt;/th&gt;
      &lt;th&gt;p-val&lt;/th&gt;
      &lt;th&gt;study 2&lt;/th&gt;
      &lt;th&gt;p-val&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;numbers&lt;/td&gt;
      &lt;td&gt;0.094&lt;/td&gt;
      &lt;td&gt;0.071&lt;/td&gt;
      &lt;td&gt;0.063&lt;/td&gt;
      &lt;td&gt;0.291&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;diversity&lt;/td&gt;
      &lt;td&gt;0.168&lt;/td&gt;
      &lt;td&gt;0.001&lt;/td&gt;
      &lt;td&gt;0.131&lt;/td&gt;
      &lt;td&gt;0.029&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;worthiness&lt;/td&gt;
      &lt;td&gt;0.607&lt;/td&gt;
      &lt;td&gt;0.000&lt;/td&gt;
      &lt;td&gt;1.127&lt;/td&gt;
      &lt;td&gt;0.000&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;unity&lt;/td&gt;
      &lt;td&gt;0.201&lt;/td&gt;
      &lt;td&gt;0.000&lt;/td&gt;
      &lt;td&gt;0.126&lt;/td&gt;
      &lt;td&gt;0.034&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;In this case, we find that numbers matter less than the other three factors.&lt;/p&gt;

&lt;p&gt;Taken together, these two papers suggest:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Legislators care a lot about protest size. The general public maybe cares a bit, but not much.&lt;/li&gt;
  &lt;li&gt;Even (comparatively) small protests are effective at garnering support from the general public. It’s not clear whether they are effective for legislators.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;(If small protests turned off the general public, we would expect the “numbers” variable to have strong predictive power, but it doesn’t.)&lt;/p&gt;

&lt;h2 id=&quot;indirect-evidence-from-lab-experiments&quot;&gt;Indirect evidence from lab experiments&lt;/h2&gt;

&lt;p&gt;Wouters &amp;amp; Walgrave (2017)&lt;sup id=&quot;fnref:1:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt; and Wouters (2019)&lt;sup id=&quot;fnref:2:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt; were the only two papers I could find that directly tested the effect of protest size. But there could also be indirect evidence. I’m imagining something like this:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;A lab experiment showed people either a news article about a protest, or a “control” news article. People who read about the protest were [more/less] supportive of the protesters’ cause.&lt;/li&gt;
  &lt;li&gt;The protest described in the article happened to be small.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That result would provide evidence about how small protests influence people.&lt;/p&gt;

&lt;p&gt;To see if there was something like that, I looked through the studies cited by a meta-analysis by &lt;a href=&quot;https://mdickens.me/materials/Protest%20Meta-Analysis.pdf&quot;&gt;Orazani et al. (2021)&lt;/a&gt;&lt;sup id=&quot;fnref:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt;. I found two relevant papers: &lt;a href=&quot;/materials/thomas2013.pdf&quot;&gt;Thomas &amp;amp; Louis (2013)&lt;/a&gt;&lt;sup id=&quot;fnref:4&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:4&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;5&lt;/a&gt;&lt;/sup&gt; and &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2911177&quot;&gt;Feinberg et al. (2017)&lt;/a&gt;&lt;sup id=&quot;fnref:5&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:5&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;6&lt;/a&gt;&lt;/sup&gt;. (See &lt;a href=&quot;#appendix-table-of-papers-from-orazani-et-al-2021&quot;&gt;Appendix&lt;/a&gt; for a list of every paper.)&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;/materials/thomas2013.pdf&quot;&gt;Thomas &amp;amp; Louis (2013)&lt;/a&gt;&lt;sup id=&quot;fnref:4:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:4&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;5&lt;/a&gt;&lt;/sup&gt; did two experiments comparing participants’ reactions to news articles about violent vs. nonviolent protests. The two experiments were more or less the same, except that Experiment 1 covered &lt;a href=&quot;https://en.wikipedia.org/wiki/Fracking&quot;&gt;fracking&lt;/a&gt; protests and Experiment 2 was about anti-whaling activism. Unfortunately the contents of the news articles are not publicly available and the corresponding author did not reply to my inquiry, so I don’t know how the protests were described in terms of size.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2911177&quot;&gt;Feinberg et al. (2017)&lt;/a&gt;&lt;sup id=&quot;fnref:5:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:5&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;6&lt;/a&gt;&lt;/sup&gt; included three studies. Each study presented participants with an article or video about a different protest.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Study 1: The articles described a fictitious animal rights group. Protest size was reported in the articles as “about thirty people”.&lt;/li&gt;
  &lt;li&gt;Study 2: The articles described a Black Lives Matter march. The number of protesters was not specified in the articles.&lt;/li&gt;
  &lt;li&gt;Study 3: Participants were shown videos of Trump protests. One video showed a protest with roughly 70 participants&lt;sup id=&quot;fnref:14&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:14&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;7&lt;/a&gt;&lt;/sup&gt; but I don’t know how many protesters were in the other video.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In study 1, participants reported relatively high support for protesters in the “Moderate” condition, in which a fictional animal rights group picketed a cosmetics company. (In the “Extreme” condition, the protesters broke into the building and freed animals.) However, there was no control group (!!), so we don’t know whether reading about the protest caused support to go up or whether support would’ve been high anyway. Since the protest was described as having only thirty people, this would’ve been exactly the evidence we want, if only the study had included a control group.&lt;/p&gt;

&lt;p&gt;One thing we can say about study 1 is that &lt;em&gt;if&lt;/em&gt; small protests reduce support, then they don’t reduce support by as much as “extreme” protests do.&lt;/p&gt;

&lt;p&gt;There is one additional paper, &lt;a href=&quot;https://doi.org/10.1177/2378023120925949&quot;&gt;Bugden (2020)&lt;/a&gt;&lt;sup id=&quot;fnref:9&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:9&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;8&lt;/a&gt;&lt;/sup&gt;, that was not included in the Orazani et al. meta-analysis.&lt;sup id=&quot;fnref:10&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:10&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;9&lt;/a&gt;&lt;/sup&gt; It showed participants articles in four conditions: a control,&lt;sup id=&quot;fnref:11&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:11&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;10&lt;/a&gt;&lt;/sup&gt; peaceful protest, disruptive protest, and violent protest. The peaceful protest article (found in the &lt;a href=&quot;https://journals.sagepub.com/doi/suppl/10.1177/2378023120925949/suppl_file/online_supplementary_materials_socius.docx&quot;&gt;supplement document&lt;/a&gt;) opened with:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;On Thursday, thousands of protestors took to the streets as the state legislature prepares to vote on a climate change bill.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This protest was not small, so Bugden (2020) doesn’t provide relevant evidence.&lt;/p&gt;

&lt;h2 id=&quot;non-experimental-evidence&quot;&gt;Non-experimental evidence&lt;/h2&gt;

&lt;p&gt;An observational study by &lt;a href=&quot;https://doi.org/10.1038/s41893-024-01444-1&quot;&gt;Ostarek et al. (2024)&lt;/a&gt;&lt;sup id=&quot;fnref:12&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:12&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;11&lt;/a&gt;&lt;/sup&gt; studied the effect of a disruptive protest by the climate group Just Stop Oil in which protesters blockaded a highway. By running polls before and after, the researchers found that support for a more &lt;em&gt;moderate&lt;/em&gt; climate group, Friends of the Earth, increased just after the Just Stop Oil protest.&lt;/p&gt;

&lt;p&gt;The protest consisted of 45 people (&lt;a href=&quot;https://www.independent.co.uk/news/uk/crime/roger-hallam-m25-just-stop-oil-court-of-appeal-police-b2582094.html&quot;&gt;source&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;At first glance, this appears to indicate that small protests can be effective. But I’m not sure that’s an appropriate interpretation of the evidence, because:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;It was an observational study, not an experiment or even a natural experiment.&lt;sup id=&quot;fnref:13&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:13&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;12&lt;/a&gt;&lt;/sup&gt;&lt;/li&gt;
  &lt;li&gt;Other studies have found negative effects for disruptive protests.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;None of the evidence I found was very good.&lt;/p&gt;

&lt;p&gt;Here are my takeaways, but given the state of the evidence, I don’t have much confidence in them.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Large protests work better than small protests at garnering support among policy-makers. (credence: 80%)&lt;/li&gt;
  &lt;li&gt;The general public probably doesn’t greatly care about the size of a protest. (credence: 60%)&lt;/li&gt;
  &lt;li&gt;Small protests can probably be effective at garnering support among the general public. (credence: 60%)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Do small protests persuade the general public? Looks like yes (but, at the risk of repeating myself, the evidence was not strong).&lt;/p&gt;

&lt;p&gt;Do small protests persuade policy-makers? I couldn’t find any evidence either way. (But the fact that I couldn’t find anything is weak evidence against.)&lt;/p&gt;

&lt;h2 id=&quot;future-work&quot;&gt;Future work&lt;/h2&gt;

&lt;p&gt;I see two obvious ways to learn more about how well small protests work. They’re out of scope for this post, but they wouldn’t be too hard.&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Analyze the data collected in &lt;a href=&quot;https://mdickens.me/2025/04/18/protest_outcomes_critical_review/&quot;&gt;natural experiments&lt;/a&gt; and use a non-linear model to assess the effectiveness of small protests.&lt;/li&gt;
  &lt;li&gt;Run a new survey (on Mechanical Turk or similar) showing people small protests vs. large protests vs. no protests, and then ask them about their opinions on the protesters’ cause.&lt;/li&gt;
&lt;/ol&gt;
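&lt;p&gt;The first idea could be approached as a model-comparison exercise. Below is a minimal sketch in Python on &lt;em&gt;synthetic&lt;/em&gt; data (the variable names and the log-shaped “true” effect are assumptions for illustration, not results from any real dataset): it fits a linear model and a concave log model to a county-level outcome and compares residuals to see which shape the data prefers.&lt;/p&gt;

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
# Synthetic county-level data: protest size and a noisy vote-shift outcome
# generated with diminishing returns (log-shaped), purely for illustration.
protesters = rng.integers(0, 5000, size=n).astype(float)
vote_shift = 0.8 * np.log1p(protesters) + rng.normal(0.0, 1.0, size=n)

def rss_of_fit(features):
    """Least-squares fit of vote_shift on one feature plus an intercept;
    returns the residual sum of squares."""
    X = np.column_stack([features, np.ones(n)])
    coef, *_ = np.linalg.lstsq(X, vote_shift, rcond=None)
    resid = vote_shift - X @ coef
    return float(resid @ resid)

rss = {
    "linear": rss_of_fit(protesters),          # straight-line model
    "concave": rss_of_fit(np.log1p(protesters)),  # diminishing-returns model
}
print(rss)
```

&lt;p&gt;On real county-level data, a spline or some other flexible fit near zero would be a better choice than a single log term, but the comparison logic would be the same.&lt;/p&gt;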

&lt;h2 id=&quot;appendix-table-of-papers-from-orazani-et-al-2021&quot;&gt;Appendix: Table of papers from Orazani et al. (2021)&lt;/h2&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Paper&lt;/th&gt;
      &lt;th&gt;Status&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Thomas &amp;amp; Louis (2014)&lt;/td&gt;
      &lt;td&gt;included useful information&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Orazani &amp;amp; Leidner (2018)&lt;/td&gt;
      &lt;td&gt;I couldn’t find the full text&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Becker et al. (2011)&lt;/td&gt;
      &lt;td&gt;dependent variable was not relevant&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Feinberg et al. (2017)&lt;/td&gt;
      &lt;td&gt;included useful information&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Gutting (2017)&lt;/td&gt;
      &lt;td&gt;dependent variable was not relevant&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Leggett (2010)&lt;/td&gt;
      &lt;td&gt;unpublished&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Shuman et al.&lt;/td&gt;
      &lt;td&gt;unpublished&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Wouters, R., &amp;amp; Walgrave, S. (2017). &lt;a href=&quot;https://doi.org/10.1177/0003122417690325&quot;&gt;Demonstrating Power.&lt;/a&gt; &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:1:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:6&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I suspect that the phrases “less/more than expected” have a bigger effect on people’s perception than the numbers themselves. But this hypothesis hasn’t been tested. Some evidence for my hypothesis is that &lt;a href=&quot;https://doi.org/10.1093/qje/qjt021&quot;&gt;Madestam et al. (2013)&lt;/a&gt;&lt;sup id=&quot;fnref:7&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:7&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;13&lt;/a&gt;&lt;/sup&gt; (which I &lt;a href=&quot;https://mdickens.me/2025/04/18/protest_outcomes_critical_review/#madestam-et-al-2013-on-tea-party-protests&quot;&gt;reviewed previously&lt;/a&gt;) found a strong county-level effect on protests, and the average protest size was 815 people per county, which is much closer to the “small” condition (500 people) than the “large” condition (5,000). So my guess is that 500 only sounds small because the article presented it as “less than expected”. However, the protests studied in Madestam et al. (2013) might differ in other meaningful ways. &lt;a href=&quot;#fnref:6&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:2&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Wouters, R. (2019). &lt;a href=&quot;https://doi.org/10.1093/sf/soy110&quot;&gt;The Persuasive Power of Protest. How Protest wins Public Support.&lt;/a&gt; &lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:2:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:3&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Orazani, N., Tabri, N., Wohl, M. J. A., &amp;amp; Leidner, B. (2021). &lt;a href=&quot;https://doi.org/10.1002/EJSP.2722&quot;&gt;Social movement strategy (nonviolent vs. violent) and the garnering of third‐party support: A meta‐analysis.&lt;/a&gt; doi: &lt;a href=&quot;https://doi.org/10.1002/ejsp.2722&quot;&gt;10.1002/ejsp.2722&lt;/a&gt; &lt;a href=&quot;#fnref:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:4&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Thomas, E. F., &amp;amp; Louis, W. R. (2013). &lt;a href=&quot;https://doi.org/10.1177/0146167213510525&quot;&gt;When Will Collective Action Be Effective? Violent and Non-Violent Protests Differentially Influence Perceptions of Legitimacy and Efficacy Among Sympathizers.&lt;/a&gt; &lt;a href=&quot;#fnref:4&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:4:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:5&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Feinberg, M., Willer, R., &amp;amp; Kovacheff, C. (2017). &lt;a href=&quot;https://dx.doi.org/10.2139/ssrn.2911177&quot;&gt;Extreme Protest Tactics Reduce Popular Support for Social Movements.&lt;/a&gt; &lt;a href=&quot;#fnref:5&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:5:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:14&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Source: I watched the video and counted how many people I could see. Some of the people were clearly bystanders, not protesters, but others were ambiguous so I’m not sure about the exact count. &lt;a href=&quot;#fnref:14&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:9&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Bugden, D. (2020). &lt;a href=&quot;https://doi.org/10.1177/2378023120925949&quot;&gt;Does Climate Protest Work? Partisanship, Protest, and Sentiment Pools.&lt;/a&gt; &lt;a href=&quot;#fnref:9&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:10&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Even though Orazani et al. was published in 2021, its literature review was conducted in 2018. &lt;a href=&quot;#fnref:10&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:11&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;The control condition simply noted that protests exist without describing them at all, and asked participants if they supported the protesters’ cause. &lt;a href=&quot;#fnref:11&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:12&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Ostarek, M., Simpson, B., Rogers, C., &amp;amp; Ozden, J. (2024). &lt;a href=&quot;https://doi.org/10.1038/s41893-024-01444-1&quot;&gt;Radical climate protests linked to increases in public support for moderate organizations.&lt;/a&gt;&lt;/p&gt;

      &lt;p&gt;See also a less-technical 2022 preprint at &lt;a href=&quot;https://www.socialchangelab.org/_files/ugd/503ba4_a184ae5bbce24c228d07eda25566dc13.pdf&quot;&gt;https://www.socialchangelab.org/_files/ugd/503ba4_a184ae5bbce24c228d07eda25566dc13.pdf&lt;/a&gt;. &lt;a href=&quot;#fnref:12&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:13&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I don’t see how the change wouldn’t be causal, but that could just be a failure of imagination on my part. &lt;a href=&quot;#fnref:13&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:7&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Madestam, A., Shoag, D., Veuger, S., &amp;amp; Yanagizawa-Drott, D. (2013). &lt;a href=&quot;https://doi.org/10.1093/qje/qjt021&quot;&gt;Do Political Protests Matter? Evidence from the Tea Party Movement*.&lt;/a&gt; &lt;a href=&quot;#fnref:7&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>My Third Caffeine Self-Experiment</title>
				<pubDate>Mon, 03 Nov 2025 00:00:00 -0800</pubDate>
				<link>http://mdickens.me/2025/11/03/third_caffeine_self-experiment/</link>
				<guid isPermaLink="true">http://mdickens.me/2025/11/03/third_caffeine_self-experiment/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;Last year I did a &lt;a href=&quot;https://mdickens.me/2024/04/11/caffeine_self_experiment/&quot;&gt;caffeine cycling self-experiment&lt;/a&gt; and I determined that I don’t get habituated to caffeine when I drink coffee three days a week. I did a &lt;a href=&quot;https://mdickens.me/2024/06/24/continuing_caffeine_self_experiment/&quot;&gt;follow-up experiment&lt;/a&gt; where I upgraded to &lt;em&gt;four&lt;/em&gt; days a week (Mon/Wed/Fri/Sat) and I found that I &lt;em&gt;still&lt;/em&gt; don’t get habituated.&lt;/p&gt;

&lt;p&gt;For my current weekly routine, I have caffeine on Monday, Wednesday, Friday, and Saturday. Subjectively, I often feel low-energy on Saturdays. Is that because the caffeine I took on Friday is having an aftereffect that makes me more tired on Saturday?&lt;/p&gt;

&lt;p&gt;When I ran my second experiment, I took caffeine four days a week, including the three-day stretch of Wednesday-Thursday-Friday. I found that my performance on a reaction time test was comparable between Wednesday and Friday. If my reaction time stayed the same after taking caffeine three days in a row, that’s evidence that I didn’t develop a tolerance over the course of those three days.&lt;/p&gt;

&lt;p&gt;But if three days isn’t long enough for me to develop a tolerance, why is it that lately I feel tired on Saturdays, after taking caffeine for only two days in a row? Was the result from my last experiment incorrect?&lt;/p&gt;

&lt;p&gt;So I decided to do another experiment to get more data.&lt;/p&gt;

&lt;p&gt;This time I ran a six-week self-experiment where I kept my current routine, but I tested my reaction time every day. I wanted to test two hypotheses:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Is my post-caffeine reaction time worse on Saturday than on Mon/Wed/Fri?&lt;/li&gt;
  &lt;li&gt;Is my reaction time worse on the morning after a caffeine day than on the morning after a caffeine-free day?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The first hypothesis tests whether I become habituated to caffeine, and the second hypothesis tests whether I experience withdrawal symptoms the following morning.&lt;/p&gt;
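&lt;p&gt;For concreteness, here’s a rough sketch (in Python, on &lt;em&gt;synthetic&lt;/em&gt; reaction times; this is not my actual analysis code or data) of how hypothesis 1 could be tested: compare Mon/Wed/Fri post-caffeine tests against Saturday post-caffeine tests with a permutation test on the difference in means.&lt;/p&gt;

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic post-caffeine reaction times in ms over six weeks.
# Under hypothesis 1, Saturday times would be systematically slower.
mwf = rng.normal(250, 15, size=18)  # Mon/Wed/Fri: 3 tests/week * 6 weeks
sat = rng.normal(250, 15, size=6)   # Saturday: 1 test/week * 6 weeks

def perm_pvalue(a, b, n_perm=10_000):
    """Two-sided permutation test for a difference in means."""
    observed = abs(a.mean() - b.mean())
    pooled = np.concatenate([a, b])
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(pooled[:len(a)].mean() - pooled[len(a):].mean())
        hits += diff >= observed
    return hits / n_perm

p = perm_pvalue(mwf, sat)
print(f"p-value: {p:.3f}")
```

&lt;p&gt;With only six Saturdays of data, a test like this has limited power, so “no detectable difference” is weaker evidence than it might sound.&lt;/p&gt;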

&lt;p&gt;The answers I got were:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;No, there’s no detectable difference.&lt;/li&gt;
  &lt;li&gt;No, there’s no detectable difference.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Therefore, in defiance of my subjective experience—but in agreement with my earlier experimental results—I do not become detectably habituated to caffeine on the second day.&lt;/p&gt;

&lt;p&gt;However, it’s possible that caffeine habituation affects my &lt;em&gt;fatigue&lt;/em&gt; even though it doesn’t affect my &lt;em&gt;reaction time&lt;/em&gt;. So it’s hard to say for sure what’s going on without running more tests (which I may do at some point).&lt;/p&gt;

&lt;!-- more --&gt;

&lt;h2 id=&quot;contents&quot;&gt;Contents&lt;/h2&gt;

&lt;ul id=&quot;markdown-toc&quot;&gt;
  &lt;li&gt;&lt;a href=&quot;#contents&quot; id=&quot;markdown-toc-contents&quot;&gt;Contents&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#experimental-procedure&quot; id=&quot;markdown-toc-experimental-procedure&quot;&gt;Experimental procedure&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#results&quot; id=&quot;markdown-toc-results&quot;&gt;Results&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#alternative-experimental-procedures-that-im-not-going-to-do&quot; id=&quot;markdown-toc-alternative-experimental-procedures-that-im-not-going-to-do&quot;&gt;Alternative experimental procedures that I’m not going to do&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#a-story-about-how-i-thought-my-experiment-failed-but-actually-i-was-just-being-stupid&quot; id=&quot;markdown-toc-a-story-about-how-i-thought-my-experiment-failed-but-actually-i-was-just-being-stupid&quot;&gt;A story about how I thought my experiment failed, but actually I was just being stupid&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#notes&quot; id=&quot;markdown-toc-notes&quot;&gt;Notes&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;experimental-procedure&quot;&gt;Experimental procedure&lt;/h2&gt;

&lt;p&gt;As &lt;a href=&quot;https://mdickens.me/2024/03/02/caffeine_tolerance/#appendix-b-pre-registration-for-a-caffeine-self-experiment&quot;&gt;with my previous experiments&lt;/a&gt;, I took a reaction time test every morning before caffeine, as well as an hour after caffeine on days when I took it (Mon/Wed/Fri/Sat). I ran the test for six weeks.&lt;/p&gt;

&lt;p&gt;This experiment had the same flaws as my previous experiments, e.g., I did not blind myself because blinding myself is annoying and I didn’t feel like doing it.&lt;/p&gt;

&lt;p&gt;In my first two experiments, I was meticulous about controlling the conditions on my computer during the reaction time test. I always tested using the &lt;a href=&quot;https://humanbenchmark.com/tests/reactiontime&quot;&gt;humanbenchmark.com test&lt;/a&gt; in Chrome with a single browser window open. I normally use Firefox, but I tested in a different browser to be sure that my 100+ open Firefox tabs wouldn’t interfere with the test in any way (perhaps background tasks could slow down the JavaScript code that runs the reaction time app, which could artificially inflate my reaction time). I tested without any other applications open on my computer except for Emacs and a terminal window (which I always have open).&lt;/p&gt;

&lt;p&gt;For my most recent experiment, I wasn’t so meticulous about it because I wanted to be lazy and I figured it probably didn’t matter. I still did the reaction time test in Chrome, but I didn’t close Firefox or other applications during the test.&lt;/p&gt;

&lt;h2 id=&quot;results&quot;&gt;Results&lt;/h2&gt;

&lt;p&gt;First, I tested to see if caffeine even made a visible difference in reaction time. &lt;a href=&quot;https://mdickens.me/2024/04/11/caffeine_self_experiment/&quot;&gt;Last time&lt;/a&gt;, caffeine had a strong and readily apparent effect on my reaction time. My third experiment replicated this result:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;caffeine vs. no-caffeine:
    298.0 ms vs. 303.5 ms
    t-stat = -2.9, p-value = 0.006
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;However, my reaction time was noticeably worse than in the previous two experiments. My average used to hover around 280 ms and now it was hovering around 300 ms. Perhaps because I was less meticulous about keeping my computer in consistent conditions, I ended up adding some latency to the reaction time app?&lt;/p&gt;

&lt;p&gt;Some evidence for this hypothesis is that I’ve tried testing my reaction time on Windows a few times (I normally use Linux) and it’s &lt;em&gt;much&lt;/em&gt; faster—more like 230 ms. This is almost certainly due to a difference in how the reaction time app works on Windows vs. Linux.&lt;/p&gt;

&lt;p&gt;My primary hypothesis test—which I pre-registered to myself, but did not pre-register publicly—was to compare post-caffeine reaction time performance on Saturday vs. the average of every other caffeine day (Mon/Wed/Fri). This test got a null result:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;Saturdays vs. non-Saturday caffeine days:
    297.2 ms vs. 298.3 ms
    t-stat = -0.4, p-value = 0.697
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;I felt generally worse on Saturdays, but perhaps I was imagining things or seeing patterns that weren’t there, and really I shouldn’t worry about it.&lt;/p&gt;

&lt;p&gt;Or perhaps I do actually feel worse on the second caffeine day, in a way that reaction time fails to capture. It’s possible that caffeine’s different effects habituate at different rates, and I’m losing my alertness faster than I’m losing my reaction speed.&lt;/p&gt;

&lt;p&gt;(I would guess that caffeine’s effect on exercise performance would habituate particularly slowly—as I understand, caffeine improves exercise by physiologically improving muscle function somehow (it enhances calcium circulation or something), not just by increasing alertness.)&lt;/p&gt;

&lt;p&gt;My second hypothesis was that I experience caffeine withdrawal on the morning after a caffeine day. I got a null result for this hypothesis as well:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;morning after caffeine vs. morning after nocaf:
    303.6 ms (sd 6.9) vs. 303.5 ms (sd 7.0)
    mean difference = 0.1 ms
    t-stat = 0.0, p-value = 0.987
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;(Before running the experiment, I had a vague idea that I wanted to test this hypothesis, but I didn’t mentally pre-register a methodology.)&lt;/p&gt;
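
&lt;p&gt;Both comparisons are ordinary two-sample t-tests on mean reaction time. Here is a minimal sketch of the computation, using made-up sample values rather than my real measurements:&lt;/p&gt;

```python
# Welch's two-sample t-statistic, the kind of test used for the
# caffeine vs. no-caffeine comparisons above. The sample values are
# made up for illustration; they are not the real measurements.
import math
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's t-statistic for two independent samples with unequal variances."""
    standard_error_sq = stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b)
    return (mean(a) - mean(b)) / math.sqrt(standard_error_sq)

caffeine = [295.0, 301.2, 296.8, 299.5, 297.1, 298.4]     # ms, hypothetical
no_caffeine = [302.0, 305.1, 301.9, 304.3, 303.0, 304.7]  # ms, hypothetical

print(f"t-stat = {welch_t(caffeine, no_caffeine):.1f}")
# A negative t-stat means the caffeine mean is lower, i.e. faster.
```

&lt;p&gt;(The p-value then comes from the t-distribution with the Welch–Satterthwaite degrees of freedom; &lt;code&gt;scipy.stats.ttest_ind(a, b, equal_var=False)&lt;/code&gt; does both steps in one call.)&lt;/p&gt;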

&lt;h2 id=&quot;alternative-experimental-procedures-that-im-not-going-to-do&quot;&gt;Alternative experimental procedures that I’m not going to do&lt;/h2&gt;

&lt;p&gt;It could be that my &lt;em&gt;reaction time&lt;/em&gt; doesn’t get worse on the second day, but my &lt;em&gt;alertness&lt;/em&gt; does get worse. I can think of two methods to test that hypothesis, but I don’t want to do them.&lt;/p&gt;

&lt;p&gt;Method 1: Same procedure as before, but instead of using a reaction time test as the dependent variable, I subjectively rate my alertness. This seems not good because it’s unblinded. I’m not too concerned about blinding reaction time because it’s hard to placebo yourself into a faster reaction time, but “subjective rating of alertness” is exactly the sort of thing that’s highly prone to a placebo effect.&lt;/p&gt;

&lt;p&gt;Method 2: Randomize whether I take caffeine pills or placebo pills, and blind myself. To detect potential habituation, I can take the same pill two days in a row, but blind myself to what type of pill it is. Then I subjectively rate my alertness. I don’t want to do that either because it would require working out without caffeine 50% of the time, and working out without caffeine is unpleasant.&lt;/p&gt;

&lt;h2 id=&quot;a-story-about-how-i-thought-my-experiment-failed-but-actually-i-was-just-being-stupid&quot;&gt;A story about how I thought my experiment failed, but actually I was just being stupid&lt;/h2&gt;

&lt;p&gt;After completing my experiment—this was about three months ago&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;—I wrote some code to test the hypotheses. To my dismay, I found no detectable difference between caffeine and no-caffeine reaction times:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;caffeine vs. no-caffeine:
    298.3 ms vs. 303.5 ms
    t-stat = 0.0, p-value = 0.987
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;If there’s not even a difference between caffeine and no-caffeine, then the experiment is useless.&lt;/p&gt;

&lt;p&gt;At the time, I was too tired and demotivated to write up the results, so I abandoned it for a while.&lt;/p&gt;

&lt;p&gt;Eventually I came back to the experiment and decided to finally write up the results. I looked at the numbers and noticed that they didn’t make any sense. If the difference between caffeine and no-caffeine was 5.2 ms, how was the t-stat 0.0?&lt;/p&gt;

&lt;p&gt;You may be able to see the mistake I made if you look at the numbers from the &lt;a href=&quot;#results&quot;&gt;Results&lt;/a&gt; section. Instead of printing the t-stat and p-value for the caffeine vs. no-caffeine t-test, I accidentally printed the numbers from the &lt;em&gt;morning after caffeine vs. morning after nocaf&lt;/em&gt; test. So the figures I was looking at were totally wrong.&lt;/p&gt;

&lt;p&gt;I guess I wasn’t 100% there mentally when I wrote the code. (Honestly I don’t think I was even 30% there.)&lt;sup id=&quot;fnref:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;If you want to check if my code contains any other horrible mistakes, you can find it &lt;a href=&quot;https://github.com/michaeldickens/public-scripts/tree/master/caffeine&quot;&gt;on GitHub&lt;/a&gt;.&lt;/p&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I have a bad habit of letting half-finished drafts sit in my drafts folder for a long time. &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:3&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I wrote it on a non-caffeine day which might have something to do with it. &lt;a href=&quot;#fnref:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>Things I've Become More Confident About</title>
				<pubDate>Sun, 02 Nov 2025 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2025/11/02/things_ive_become_more_confident_about/</link>
				<guid isPermaLink="true">http://mdickens.me/2025/11/02/things_ive_become_more_confident_about/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;Last year, I wrote a list of &lt;a href=&quot;https://mdickens.me/2024/05/23/some_things_ive_changed_my_mind_on/&quot;&gt;things I’ve changed my mind on&lt;/a&gt;. But good truth-seeking doesn’t just require you to consider where you might be wrong; you must also consider where you might be &lt;strong&gt;right&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In this post, I list some beliefs I used to be uncertain about but have since come to hold more strongly.&lt;/p&gt;

&lt;!-- more --&gt;

&lt;ol&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;My belief:&lt;/strong&gt; Evolution is true.&lt;/p&gt;

    &lt;p&gt;&lt;strong&gt;Why I believed it originally:&lt;/strong&gt; I learned about the theory of evolution in school. I had the impression that it was a popular but unproven hypothesis (“just a theory”).&lt;sup id=&quot;fnref:4&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:4&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

    &lt;p&gt;&lt;strong&gt;What made me more confident:&lt;/strong&gt; When I was maybe 10 or 12, I read an article in some science magazine (&lt;em&gt;National Geographic&lt;/em&gt;, maybe) about evolution. It said, “evolution is a theory in the same way atoms are a theory.” I probably put too much credence in this one sentence in one article, but in my mind, this was definitive proof that evolution is true.&lt;/p&gt;

    &lt;p&gt;Later, when I was 14, I started getting interested in the specifics of the theory of evolution and learned much more about the supporting evidence. (My motivation was mostly that I wanted to argue with creationists on the internet.)&lt;/p&gt;

    &lt;p&gt;I went through a similar trajectory when learning about &lt;a href=&quot;https://en.wikipedia.org/wiki/Quark&quot;&gt;quarks&lt;/a&gt;. I was taught that a quark is a hypothetical particle that exists inside atoms, but has never been observed. Later I learned that the existence of quarks is well-established, and it became well-established nearly three decades before I was born.&lt;/p&gt;

    &lt;p&gt;On the subject of outdated pedagogy, this is a bit of a tangent but in 5th grade I was taught the &lt;a href=&quot;https://en.wikipedia.org/wiki/Kingdom_(biology)#Five_kingdoms&quot;&gt;five kingdoms of life&lt;/a&gt;: monerans, protists, fungi, plants, and animals. Recently, I learned that not only do biologists no longer use this classification system, but that it was already obsolete &lt;em&gt;when my 5th grade teacher was in 5th grade.&lt;/em&gt;&lt;sup id=&quot;fnref:5&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:5&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

    &lt;p&gt;(My 5th grade teacher was pretty young, but still.)&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;My belief:&lt;/strong&gt; &lt;a href=&quot;https://en.wikipedia.org/wiki/Value_investing&quot;&gt;Value investing&lt;/a&gt; works.&lt;/p&gt;

    &lt;p&gt;&lt;strong&gt;Why I believed it originally:&lt;/strong&gt; I read about Joel Greenblatt’s &lt;a href=&quot;https://en.wikipedia.org/wiki/Magic_formula_investing&quot;&gt;magic formula investing&lt;/a&gt; and its strong historical performance.&lt;/p&gt;

    &lt;p&gt;&lt;strong&gt;What made me more confident:&lt;/strong&gt; I read more research on value investing, including the seminal paper &lt;a href=&quot;https://doi.org/10.1111/j.1540-6261.1992.tb04398.x&quot;&gt;The Cross-Section of Expected Stock Returns&lt;/a&gt;&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt; by Fama and French, and more in-depth research showing value investing has worked &lt;a href=&quot;https://dx.doi.org/10.2139/ssrn.2174501&quot;&gt;across the world and across asset classes&lt;/a&gt;&lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt;, and on older data &lt;a href=&quot;https://dx.doi.org/10.2139/ssrn.3325720&quot;&gt;going back 200 years&lt;/a&gt;&lt;sup id=&quot;fnref:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;5&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;My belief:&lt;/strong&gt; Peaceful protests can be effective.&lt;/p&gt;

    &lt;p&gt;&lt;strong&gt;Why I believed it originally:&lt;/strong&gt; I actually went back and forth on this one. In school I learned about Martin Luther King and how he was a hero of the civil rights movement, about the Montgomery Bus Boycott that he helped organize, and about Gandhi’s protests against colonialism. Implicit in all of this was the idea that these tactics were effective.&lt;/p&gt;

    &lt;p&gt;Eventually I learned about &lt;a href=&quot;http://givewell.org/&quot;&gt;GiveWell&lt;/a&gt;, which was the first time I’d ever encountered the notion that just because a charity says it’s effective, doesn’t mean it’s actually effective. I started thinking critically about protests in the same way, and I realized that I’d never actually seen good evidence that MLK or Gandhi were responsible for the positive changes that coincided with their activism.&lt;/p&gt;

    &lt;p&gt;Then I started thinking, well, there’s not &lt;em&gt;strong&lt;/em&gt; evidence that protests work, but there’s at least &lt;em&gt;some&lt;/em&gt; reason to believe they work. That’s about where I was at in 2024 when I &lt;a href=&quot;https://mdickens.me/2024/11/18/where_i_am_donating_in_2024/&quot;&gt;donated to PauseAI&lt;/a&gt;—I thought, I don’t really know if this is gonna work, but it’s worth trying.&lt;/p&gt;

    &lt;p&gt;&lt;strong&gt;What made me more confident:&lt;/strong&gt; I wrote &lt;a href=&quot;https://mdickens.me/2025/04/18/protest_outcomes_critical_review/&quot;&gt;Do Protests Work? A Critical Review&lt;/a&gt;, in which I carefully investigated the strongest evidence I could find. I found that the best evidence was better than I’d expected, and it pointed toward peaceful protests being effective.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;My belief:&lt;/strong&gt; Seed oils are good for you; seed oils don’t cause obesity.&lt;/p&gt;

    &lt;p&gt;&lt;strong&gt;Why I believed it originally:&lt;/strong&gt; I had never heard of the seed oil-obesity hypothesis until I read &lt;a href=&quot;https://dynomight.net/seed-oil/&quot;&gt;Dynomight’s article&lt;/a&gt; on the subject, which argues &lt;em&gt;against&lt;/em&gt; the hypothesis. Dynomight presented some evidence that seed oils are harmful and then ultimately concluded that they’re not. I didn’t think much about the evidence the article gave, but its conclusion seemed reasonable to me.&lt;/p&gt;

    &lt;p&gt;&lt;strong&gt;What made me more confident:&lt;/strong&gt; I researched the issue in more depth while writing &lt;a href=&quot;https://mdickens.me/2024/09/26/outlive_a_critical_review/&quot;&gt;Outlive: A Critical Review&lt;/a&gt;, specifically the &lt;a href=&quot;https://mdickens.me/2024/09/26/outlive_a_critical_review/#the-data-are-unclear-on-whether-reducing-saturated-fat-intake-is-beneficial&quot;&gt;section on saturated fat&lt;/a&gt;. I looked through the literature and presented what I believed to be the strongest evidence on the matter: meta-analyses of RCTs that directly compared dietary saturated fat with unsaturated fat (which usually meant seed oils). The experimental evidence finds that seed oils are, if anything, healthier than saturated fat, which contradicts the seed oil-obesity hypothesis.&lt;/p&gt;

    &lt;p&gt;I read some writings by proponents of the seed oil hypothesis, and their arguments seemed &lt;a href=&quot;https://mdickens.me/2024/10/12/worst_argument_in_the_world/&quot;&gt;incredibly weak&lt;/a&gt; to me.&lt;/p&gt;

    &lt;p&gt;(Later, I re-read &lt;a href=&quot;https://dynomight.net/seed-oil/&quot;&gt;Dynomight’s article&lt;/a&gt; and found that it cited the same evidence I had looked at while writing my review of &lt;em&gt;Outlive&lt;/em&gt;, which I had completely forgotten about.)&lt;/p&gt;

    &lt;p&gt;Dynomight presented the seed oil hypothesis as reasonable but ultimately probably wrong, so that’s what I believed at the time. After examining the evidence in more depth, I don’t think the seed oil hypothesis is reasonable. Dynomight admirably followed Daniel Dennett’s &lt;a href=&quot;https://www.themarginalian.org/2014/03/28/daniel-dennett-rapoport-rules-criticism/&quot;&gt;principles for arguing intelligently&lt;/a&gt;, in which you present your opponent’s case as strongly as possible. But this gave me the impression that the seed oil hypothesis is more plausible than it actually is.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;My belief:&lt;/strong&gt; Absent regulation, we aren’t going to solve the AI alignment problem in time.&lt;/p&gt;

    &lt;p&gt;&lt;strong&gt;Why I believed it originally:&lt;/strong&gt; I’ve vaguely believed this since I first learned about the AI alignment problem (in 2013, if I remember correctly). The problem seemed to involve some thorny philosophical problems of unknown size, like the outline of an enormous beast under a murky ocean. But at that point, humanity had collectively only spent a few hundred person-years on AI alignment, and I thought, perhaps there will be some breakthrough that makes the problem turn out to be much easier than expected. Or perhaps as superintelligent AI becomes increasingly imminent, humanity will rally and pour the necessary resources into the problem.&lt;/p&gt;

    &lt;p&gt;&lt;strong&gt;What made me more confident:&lt;/strong&gt; In this case I haven’t much changed my interpretation of the evidence, but I’ve become more confident as new evidence has come out. Namely, AI has gotten extraordinarily more powerful; alignment work has not kept up with the increases in AI capabilities; even though alignment work gets more attention now, the problem still seems about as hard as ever.&lt;/p&gt;

    &lt;p&gt;Beyond that, almost all alignment work is &lt;a href=&quot;https://en.wikipedia.org/wiki/Streetlight_effect&quot;&gt;streetlight effect&lt;/a&gt;-ing, focused on solving tractable but mostly-irrelevant problems; and the frontier AI companies mostly don’t engage with, and are sometimes even actively hostile to, the idea that solving alignment will require major philosophical breakthroughs and that it can’t be done using the sorts of empirical methods that they’re all using.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;My belief:&lt;/strong&gt; Most studies on caffeine tolerance are not informative.&lt;/p&gt;

    &lt;p&gt;&lt;strong&gt;Why I believed it originally:&lt;/strong&gt; Prior to writing my post &lt;a href=&quot;https://mdickens.me/2024/03/29/does_caffeine_stop_working/&quot;&gt;Does Caffeine Stop Working?&lt;/a&gt;, I reviewed some studies on caffeine tolerance and I thought to myself, these studies aren’t even testing the hypothesis they claim to be testing, surely I must be missing something?&lt;/p&gt;

    &lt;p&gt;&lt;strong&gt;What made me more confident:&lt;/strong&gt; I read the studies more carefully and spent more time thinking about them, and read a few contrary papers by other scientists who study caffeine. My more careful analysis only reinforced my initial belief that most studies on caffeine tolerance are, indeed, not useful.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;My belief:&lt;/strong&gt; I am smart.&lt;/p&gt;

    &lt;p&gt;&lt;strong&gt;Why I believed it originally:&lt;/strong&gt; In elementary school, I knew I was the smartest kid in my class. But my class only had about 20 students, and I figured I wasn’t that smart in the grand scheme of things. Like, not as smart as scientists and people who go to Harvard and stuff.&lt;/p&gt;

    &lt;p&gt;&lt;strong&gt;What made me more confident:&lt;/strong&gt; The first big piece of evidence came after I took the &lt;a href=&quot;https://en.wikipedia.org/wiki/PSAT/NMSQT&quot;&gt;PSAT&lt;/a&gt; in 10th grade and my score was good enough that I realized I had a good shot at getting into a top university.&lt;/p&gt;

    &lt;p&gt;Then I actually attended a top university and realized that many of the people there were not that smart compared to me. College was still a big step up from elementary school: I went from always being the smartest person in the room to being only in the top 1/3 most of the time, and I sometimes found myself in the bottom third.&lt;/p&gt;

    &lt;p&gt;This trend of repeatedly up-rating my own intelligence reached its peak when I started taking advanced computer science classes, where I was close to the 50th percentile. And nowadays I’m about average within my social circles, and often below average.&lt;/p&gt;

    &lt;p&gt;(If you’re reading this, there’s a good chance that you’re smarter than me.)&lt;/p&gt;

    &lt;p&gt;Another canon event happened when I saw the data on the distribution of my school’s SAT scores. The school’s average score was just over one standard deviation &lt;em&gt;above&lt;/em&gt; the population mean.&lt;sup id=&quot;fnref:6&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:6&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;6&lt;/a&gt;&lt;/sup&gt; I went through high school thinking my average classmates were average, when in reality they were considerably &lt;em&gt;smarter&lt;/em&gt; than average.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;My belief:&lt;/strong&gt; When I first got into lifting weights a decade ago, I learned a lot of conventional wisdom like:&lt;/p&gt;

    &lt;ul&gt;
      &lt;li&gt;Low reps are better for strength, and high reps are better for hypertrophy.&lt;/li&gt;
      &lt;li&gt;Compound exercises are better for strength, and isolation exercises are better for hypertrophy.&lt;/li&gt;
      &lt;li&gt;Long rests are better for strength, and short rests are better for hypertrophy.&lt;/li&gt;
      &lt;li&gt;If you want to bulk or cut, you should eat at a 500 calorie surplus/deficit to gain/lose about a pound per week.&lt;/li&gt;
    &lt;/ul&gt;

    &lt;p&gt;&lt;strong&gt;Why I believed it originally:&lt;/strong&gt; It was the conventional wisdom—people generally agreed that these things are true, even though nobody talked about &lt;em&gt;why&lt;/em&gt;.&lt;/p&gt;

    &lt;p&gt;&lt;strong&gt;What made me more confident:&lt;/strong&gt; I started paying more attention to scientific literature on resistance training and I learned that the conventional wisdom pretty much had it right, at least on these points.&lt;/p&gt;

    &lt;p&gt;(The first three pieces of advice are all explained by a unifying factor: to build strength, you want to lift as much weight as possible, and to build muscle, you want to do as much volume as possible. High reps, isolation exercise, and short rests all enable you to wear out your muscles while lifting lighter weights, and the lighter the weights, the more volume you can do. These three bits of advice aren’t overwhelmingly important—you can still build muscle doing compound exercises at low reps—but they’re useful as guidelines.)&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;My belief:&lt;/strong&gt; Exercise is good for you.&lt;/p&gt;

    &lt;p&gt;&lt;strong&gt;Why I believed it originally:&lt;/strong&gt; Everyone says exercise is good for you, right? But I didn’t know how you’d demonstrate scientifically that that’s true. I thought perhaps it’s reverse causation (sick people can’t exercise) or confounded by socioeconomic class or something.&lt;/p&gt;

    &lt;p&gt;&lt;strong&gt;What made me more confident:&lt;/strong&gt; I learned more about the scientific evidence on exercise.&lt;/p&gt;

    &lt;ul&gt;
      &lt;li&gt;Many randomized controlled trials show that exercise improves short-term health markers—it reduces blood pressure, improves blood sugar regulation, etc.&lt;/li&gt;
      &lt;li&gt;A smaller number of long-term trials show long-term health benefits to exercise.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;details&gt;
&lt;summary&gt;Spoilers for Game of Thrones / A Song of Ice and Fire. Click here to expand.&lt;/summary&gt;
&lt;p&gt;&lt;b&gt;My belief:&lt;/b&gt; R + L = J. That is, Jon Snow&apos;s parents are Lyanna Stark and Rhaegar Targaryen.&lt;/p&gt;
    
&lt;p&gt;&lt;b&gt;Why I believed it originally:&lt;/b&gt; This had long been a popular fan theory. I didn&apos;t figure it out on my own, but I was reasonably convinced by the evidence in &lt;a href=&quot;https://web.archive.org/web/20170320074820/https://towerofthehand.com/essays/chrisholden/jon_snows_parents.html&quot;&gt;this article&lt;/a&gt;. I thought it sounded right, but I was uncertain because the textual evidence wasn&apos;t conclusive.&lt;/p&gt;
    
&lt;p&gt;&lt;b&gt;What made me more confident:&lt;/b&gt; I watched an interview with David Benioff and Dan Weiss, the creators of the TV show. They told a story about how they met with George R. R. Martin to get him to agree to adapt his books. At some point in the meeting, he asked them: Who is Jon Snow&apos;s mother? They gave an answer, and he didn&apos;t say whether they were right, but he gave a knowing smile, and he agreed to let them make the TV show.&lt;/p&gt;
    
&lt;p&gt;They didn&apos;t say what their answer was. But I found this story to be pretty much decisive evidence for R + L = J because what it proved was that the answer was &lt;i&gt;knowable&lt;/i&gt;. If David and Dan could know it, then the rest of the fan base could, too.&lt;/p&gt;
    
&lt;p&gt;Later I became even more confident when the TV show revealed that R + L = J. (Rarely in life do you get definitive confirmation that your theory is correct!)&lt;/p&gt;
&lt;/details&gt;
  &lt;/li&gt;
&lt;/ol&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:4&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;When I was young, I thought the way evolution worked was that a group of apes were born about 500,000 years ago, and these apes lived for hundreds of thousands of years, over which time their bodies slowly morphed to become more and more humanoid, until they became fully human, at which point they birthed human offspring and then died.&lt;/p&gt;

      &lt;p&gt;One time I told my dad that I wish I could’ve gotten to evolve because I wanted to live for 500,000 years. That’s when I learned that that’s not how evolution works. &lt;a href=&quot;#fnref:4&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:5&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;&lt;a href=&quot;https://en.wikipedia.org/wiki/Carl_Woese&quot;&gt;Carl Woese&lt;/a&gt; defined a six-kingdom taxonomy using evidence from ribosomal RNA in 1977, at which time I believe my 5th grade teacher would’ve been in 2nd grade.&lt;/p&gt;

      &lt;p&gt;Lest I sound like I know what I’m talking about, the only reason I can talk coherently about ribosomal RNA methods for taxonomic classification is because I just read those words off Wikipedia 15 seconds ago. &lt;a href=&quot;#fnref:5&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Fama, E. F., &amp;amp; French, K. R. (1992). &lt;a href=&quot;https://doi.org/10.1111/j.1540-6261.1992.tb04398.x&quot;&gt;The Cross-Section of Expected Stock Returns.&lt;/a&gt; &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:2&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Asness, C. S., Moskowitz, T. J., &amp;amp; Pedersen, L. H. (2012). &lt;a href=&quot;https://dx.doi.org/10.2139/ssrn.2174501&quot;&gt;Value and Momentum Everywhere.&lt;/a&gt; &lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:3&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Baltussen, G., Swinkels, L., &amp;amp; van Vliet, P. (2019). &lt;a href=&quot;https://dx.doi.org/10.2139/ssrn.3325720&quot;&gt;Global Factor Premiums.&lt;/a&gt; &lt;a href=&quot;#fnref:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:6&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;And someone who gets an average score on the SAT has above-average intelligence, because taking the SAT at all already screens off the lower end of the bell curve. &lt;a href=&quot;#fnref:6&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>Will Welfareans Get to Experience the Future?</title>
				<pubDate>Sat, 01 Nov 2025 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2025/11/01/will_welfareans_get_to_experience_the_future/</link>
				<guid isPermaLink="true">http://mdickens.me/2025/11/01/will_welfareans_get_to_experience_the_future/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;&lt;em&gt;Epistemic status: This entire essay rests on two controversial premises (linear aggregation and antispeciesism) that I believe are quite robust, but I will not be able to convince anyone that they’re true, so I’m not even going to try.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Cross-posted to the &lt;a href=&quot;https://forum.effectivealtruism.org/posts/gFTHuA3LvrZC2qDgx/will-welfareans-get-to-experience-the-future&quot;&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;If &lt;a href=&quot;https://www.goodthoughts.blog/p/beneficentrism&quot;&gt;welfare is important&lt;/a&gt;, and if the value of welfare scales something-like-linearly, and if there is nothing morally special about the human species&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;, then these two things are probably also true:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;The best possible universe isn’t filled with humans or human-like beings. It’s filled with some other type of being that’s much happier than humans, or has much richer experiences than humans, or otherwise experiences much more positive welfare than humans, for whatever “welfare” means. Let’s call these beings Welfareans.&lt;/li&gt;
  &lt;li&gt;A universe filled with Welfareans is &lt;em&gt;much&lt;/em&gt; better than a universe filled with humanoids.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;(Historically, people referred to these beings as “hedonium”. I dislike that term because hedonium sounds like a &lt;em&gt;thing&lt;/em&gt;. It doesn’t sound like something that matters. It’s supposed to be the opposite of that—it’s supposed to be the most profoundly innately valuable sentient being. So I think it’s better to describe the beings as Welfareans. I suppose we could also call them Hedoneans, but I don’t want to constrain myself to hedonistic utilitarianism.)&lt;/p&gt;

&lt;p&gt;Even in the “Good Ending” where we solve AI alignment and governance and coordination problems and we end up with a superintelligent AI that builds a flourishing post-scarcity civilization, will there be Welfareans? In that world, humans will be able to create a flourishing future for themselves; but beings who don’t exist yet won’t be able to give themselves good lives, because they don’t exist.&lt;/p&gt;

&lt;!-- more --&gt;

&lt;p&gt;My guess is that a tiny subset of crazy people (like me) will spend their resources making Welfareans, who will end up occupying only a tiny percentage of the accessible universe, and as a result, the future will be less than 1% as good as it could have been.&lt;/p&gt;

&lt;p&gt;(And maybe my conception of Welfareans will be wrong, and some other weirdo will be the one who makes the &lt;em&gt;real&lt;/em&gt; Welfareans.)&lt;/p&gt;

&lt;p&gt;I want the future to be nice for humans, too. (I’m a human.) But all we need to do is solve AI alignment (and various other extremely difficult, seemingly-insurmountable problems), and humans will turn out fine. Welfareans can’t advocate for themselves, and I’m afraid they won’t get the advocates they need.&lt;/p&gt;

&lt;p&gt;There is one reason why Welfareans might inherit most of the universe. Generally speaking, people don’t care about filling all available space with Dyson spheres to maximize population. They just want to live in their little corner of space, and they’d be happy to let the Welfareans have the rest.&lt;/p&gt;

&lt;p&gt;It’s probably true that most people aren’t maximizers. But &lt;em&gt;some&lt;/em&gt; people are maximizers, and most of them won’t want to maximize Welfareans; they’ll want to maximize some other thing. A lot of people will want to maximize how much of the universe is captured by humans or post-humans (or even just their personal genetic lineage). Mormons will want to maximize the number of Mormons or something. There are enough maximizing ideologies that I expect Welfareans to get squeezed out.&lt;/p&gt;

&lt;p&gt;So what can we do for the Welfareans?&lt;/p&gt;

&lt;p&gt;There are two problems:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Who even &lt;em&gt;are&lt;/em&gt; the Welfareans?&lt;/li&gt;
  &lt;li&gt;How do we ensure that the Welfareans get their share of the future’s resources?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Solving problem #1 approximately requires solving ethics (or, I guess, &lt;a href=&quot;https://en.wikipedia.org/wiki/Value_theory&quot;&gt;axiology&lt;/a&gt;). I’m not going to say more about that problem; I hope we can agree that it’s hard.&lt;/p&gt;

&lt;p&gt;For problem #2, the first answer that comes to mind is “make a power grab for as many resources as possible so I can give them to Welfareans later on”. But I’m guessing that if we solve ethics (as per problem #1), The Solution To Ethics will include a bit that says something along the lines of “don’t take other people’s stuff”. And there are only like three of us who would even care about Welfareans, so I don’t think we’d get very far anyway.&lt;/p&gt;

&lt;p&gt;So how do we increase Welfareans’ share of resources, but in an ethical manner? I don’t know. I’m going to start with “write this essay about Welfarean welfare”.&lt;/p&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;In my first draft, the opening sentence said “If something like utilitarianism is true, …”. But this is an unnecessarily strong premise. You don’t need utilitarianism, you just need linear aggregation + antispeciesism. A non-consequentialist can still believe that more welfare is better (all else equal). Such a person would still want to maximize the aggregate welfare of the universe, subject to staying within the bounds of whatever moral rules they believe in. &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>The Next-Gen LLM Might Pose an Existential Threat</title>
				<pubDate>Wed, 15 Oct 2025 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2025/10/15/next_gen_LLM_might_pose_existential_threat/</link>
				<guid isPermaLink="true">http://mdickens.me/2025/10/15/next_gen_LLM_might_pose_existential_threat/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;I’m pretty sure that the next generation of LLMs will be safe. But the risk is still high enough to make me uncomfortable.&lt;/p&gt;

                  &lt;p&gt;&lt;strong&gt;How sure are we that scaling laws are correct?&lt;/strong&gt; Researchers have drawn curves predicting how AI capabilities scale based on how much compute and data go into training them. If you extrapolate those curves, it looks like the next level of LLMs won’t be wildly more powerful than the current level. But maybe there’s a weird bump in the curve that happens in between GPT-5 and GPT-6 (or between Claude 4.5 and Claude 5), and LLMs suddenly become much more capable in a way that scaling laws didn’t predict. I don’t think we can be more than 99.9% confident that there’s not.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How sure are we that current-gen LLMs aren’t sandbagging&lt;/strong&gt; (that is, deliberately hiding their true skill level)? I think they’re still dumb enough that their sandbagging can be caught, and indeed they have been caught sandbagging on some tests. I don’t think LLMs are hiding their true capabilities in general, and our understanding of AI capabilities is probably pretty accurate. But I don’t think we can be more than 99.9% confident about that.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How sure are we that the extrapolated capability level of the next-gen LLM isn’t enough to take over the world?&lt;/strong&gt; It probably isn’t, but we don’t really know what level of capability is required for something like that. I don’t think we can be more than 99.9% confident.&lt;/p&gt;

&lt;p&gt;Perhaps we can be &amp;gt;99.99% confident that the extrapolated capability of the next-gen LLM still falls short of the smartest human. But an LLM has certain advantages over humans—it can work faster (at least on many sorts of tasks), it can copy itself, and it can operate computers in a way that humans can’t.&lt;/p&gt;

&lt;p&gt;Alternatively, GPT-6/Claude 5 might not be able to take over the world, but it might be smart enough to recursively self-improve, and that might happen too quickly for us to do anything about it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How sure are we that we aren’t wrong about something else?&lt;/strong&gt; I thought of three ways we could be disastrously wrong:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;We could be wrong about scaling laws;&lt;/li&gt;
  &lt;li&gt;We could be wrong that LLMs aren’t sandbagging;&lt;/li&gt;
  &lt;li&gt;We could be wrong about what capabilities are required for AI to take over.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;But we could be wrong about some entirely different thing that I didn’t even think of. I’m not more than 99.9% confident that my list is comprehensive.&lt;/p&gt;

&lt;p&gt;On the whole, I don’t think we can say there’s less than a 0.4% chance that the next-gen LLM forces us down a path that inevitably ends in everyone dying.&lt;/p&gt;
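&lt;p&gt;The 0.4% figure follows from the four 99.9% confidences above if we make the (strong, unstated) assumption that the four failure modes are independent. A quick sanity check:&lt;/p&gt;

```python
# Sanity check on the bottom line: four independent ~99.9% confidences
# (scaling laws hold, no sandbagging, capability falls short of takeover,
# the list of failure modes is complete) leave a ~0.4% chance of disaster.
confidences = [0.999, 0.999, 0.999, 0.999]

p_all_hold = 1.0
for p in confidences:
    p_all_hold *= p

p_disaster = 1 - p_all_hold
print(f"{p_disaster:.2%}")  # prints 0.40%
```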



                </description>
			</item>
		
			<item>
				<title>Mechanisms Rule Hypotheses Out, But Not In</title>
				<pubDate>Wed, 08 Oct 2025 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2025/10/08/mechanisms_rule_hypotheses_out_not_in/</link>
				<guid isPermaLink="true">http://mdickens.me/2025/10/08/mechanisms_rule_hypotheses_out_not_in/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;If there is no plausible mechanism by which a scientific hypothesis could be true, then it’s almost certainly false.&lt;/p&gt;

&lt;p&gt;But if there &lt;em&gt;is&lt;/em&gt; a plausible mechanism for a hypothesis, then that only provides weak evidence that it’s true.&lt;/p&gt;

&lt;p&gt;An example of the former:&lt;/p&gt;

&lt;p&gt;Astrology teaches that the positions of planets in the sky when you’re born can affect your life trajectory. If that were true, it would contradict well-established facts in physics and astronomy. Nobody has ever observed a physical mechanism by which astrology could be true.&lt;/p&gt;

&lt;p&gt;An example of the latter:&lt;/p&gt;

&lt;p&gt;A 2023 &lt;a href=&quot;https://news.uthscsa.edu/drinking-diet-sodas-and-aspartame-sweetened-beverages-daily-during-pregnancy-linked-to-autism-in-male-offspring/&quot;&gt;study&lt;/a&gt; found an association between autism and diet soda consumption during pregnancy. The authors’ proposed mechanism is that aspartame (an artificial sweetener found in diet soda) metabolizes into aspartic acid, which has been shown to cause neurological problems in mice. Nonetheless, even though there is a proposed mechanism, I don’t really care and I’m pretty sure diet soda doesn’t cause autism. (For a more thorough take on the diet soda &amp;lt;&amp;gt; autism thing, I will refer you to &lt;a href=&quot;https://dynomight.net/grug/&quot;&gt;Grug&lt;/a&gt;, who is much smarter than me.)&lt;/p&gt;

&lt;h2 id=&quot;why&quot;&gt;Why?&lt;/h2&gt;

&lt;!-- more --&gt;

&lt;p&gt;A lack of mechanism strongly rules out a hypothesis. If astrology were true, that would overturn some extremely well-established findings in physics. How could astrology possibly be true, given what we know about the laws of gravity?&lt;/p&gt;

&lt;p&gt;Perhaps scientists have overlooked something. Perhaps the planets affect humans not via gravity but via some as-yet-undiscovered fifth &lt;a href=&quot;https://en.wikipedia.org/wiki/Fundamental_interaction&quot;&gt;fundamental force&lt;/a&gt;. But if astrologers can detect the fifth force, why haven’t physicists noticed it with all their careful experimentation?&lt;/p&gt;

&lt;p&gt;On the other hand, the &lt;em&gt;existence&lt;/em&gt; of a mechanism doesn’t count for much. I often see this in biology, where someone proposes a contrarian hypothesis with a possible biological mechanism but no supporting evidence from randomized experiments. I don’t take that sort of evidence very seriously. Biology is complicated, and chemicals have all sorts of effects on bodies, and it’s very hard to predict whether those effects are net good or bad just by looking at mechanisms.&lt;/p&gt;

&lt;p&gt;For example, did you know that exercise increases inflammation? And inflammation is bad for you? And yet, exercise is good for you, because the acute inflammation caused by exercise is strongly outweighed by the long-term beneficial effects.&lt;/p&gt;

&lt;p&gt;However, when a hypothesis has supporting evidence from experiments but a &lt;em&gt;lack&lt;/em&gt; of plausible mechanism, I disbelieve the research. &lt;a href=&quot;https://en.wikipedia.org/wiki/Ganzfeld_experiment&quot;&gt;Experiments have demonstrated&lt;/a&gt; that people have psychic abilities. But I’m quite confident that people &lt;em&gt;don’t&lt;/em&gt; have psychic abilities because &lt;em&gt;there is no mechanism by which that could be true.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In the hierarchy of evidence, experiment beats mechanism, but lack of mechanism beats experiment.&lt;/p&gt;

&lt;p&gt;This asymmetry is consistent with the law of &lt;a href=&quot;https://www.lesswrong.com/w/conservation-of-expected-evidence&quot;&gt;Conservation of Expected Evidence&lt;/a&gt;. There are many plausible mechanisms out there in the world. A hypothesis &lt;em&gt;must&lt;/em&gt; have a mechanism for it to be true, but the &lt;em&gt;existence&lt;/em&gt; of a mechanism does not come anywhere close to proving a hypothesis correct.&lt;/p&gt;

&lt;h2 id=&quot;some-more-examples&quot;&gt;Some more examples&lt;/h2&gt;

&lt;p&gt;Here are some more hypotheses that are strongly ruled out by a lack of plausible mechanism:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Some houses are haunted by ghosts.&lt;/strong&gt;&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;&lt;a href=&quot;https://en.wikipedia.org/wiki/Dowsing&quot;&gt;Dowsing rods&lt;/a&gt; can detect underground water.&lt;/strong&gt;&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Fortune-tellers can predict the future.&lt;/strong&gt;&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;&lt;a href=&quot;https://en.wikipedia.org/wiki/Homeopathy&quot;&gt;Homeopathy&lt;/a&gt; can cure diseases.&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Some hypotheses with plausible mechanisms that I nonetheless believe are false:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Seed oils are bad for you because they contain linoleic acid, which causes inflammation.&lt;/strong&gt; This mechanism is true (as far as I know), but experiments comparing unsaturated fats (mainly seed oils) to saturated fats find that people who eat more of the former end up healthier; see &lt;a href=&quot;https://doi.org/10.1002/14651858.CD011737.pub3&quot;&gt;Hooper et al. (2020)&lt;/a&gt;&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt; and &lt;a href=&quot;https://iris.who.int/bitstream/handle/10665/246104/9789241565349-eng.pdf&quot;&gt;WHO (2016)&lt;/a&gt;&lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt;. Experimental evidence indicates that seed oils have overall &lt;em&gt;positive&lt;/em&gt; health effects.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Eating excess protein causes osteoporosis.&lt;/strong&gt; The proposed mechanism is that proteins increase blood acidity which causes the body to extract calcium from bones to balance out this acidity. And indeed, people on high-protein diets excrete more calcium in their urine. But randomized controlled trials have found that adding protein to the diet &lt;em&gt;reduces&lt;/em&gt; the risk of bone fracture (&lt;a href=&quot;https://doi.org/10.1080/08952841.2018.1418822&quot;&gt;Koutsofta et al. (2018)&lt;/a&gt;&lt;sup id=&quot;fnref:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt;).&lt;sup id=&quot;fnref:4&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:4&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt;
    &lt;ul&gt;
      &lt;li&gt;Relatedly, you may hear some people say you should eat more alkaline foods to fix your body’s pH balance. It would indeed be bad if your body’s pH became too low, but the empirical evidence shows that dietary pH does not affect your body’s pH in that way (see &lt;a href=&quot;https://en.wikipedia.org/wiki/Alkaline_diet&quot;&gt;Wikipedia&lt;/a&gt;).&lt;sup id=&quot;fnref:6&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:6&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;5&lt;/a&gt;&lt;/sup&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Sugar causes hyperactivity in children because it provides a short-term burst of energy.&lt;/strong&gt; This mechanism is intuitive even if you don’t know much biology. But it’s not true—RCTs have consistently found no connection between hyperactivity and sugar consumption (&lt;a href=&quot;https://doi.org/10.1136/bmj.a2769&quot;&gt;Vreeman &amp;amp; Carroll (2008)&lt;/a&gt;&lt;sup id=&quot;fnref:5&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:5&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;6&lt;/a&gt;&lt;/sup&gt;).&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Eating cholesterol raises your blood cholesterol.&lt;/strong&gt; The mechanism in this case is obvious: you eat food that contains cholesterol, and the cholesterol goes into your body. But your body regulates its own cholesterol production, and your blood cholesterol levels don’t have much to do with how much cholesterol you eat.&lt;sup id=&quot;fnref:7&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:7&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;7&lt;/a&gt;&lt;/sup&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Hooper, L., Martin, N., Jimoh, O. F., Kirk, C., Foster, E., &amp;amp; Abdelhamid, A. S. (2020). &lt;a href=&quot;https://doi.org/10.1002/14651858.CD011737.pub3&quot;&gt;Reduction in saturated fat intake for cardiovascular disease.&lt;/a&gt; doi: &lt;a href=&quot;https://doi.org/10.1002/14651858.cd011737.pub3&quot;&gt;10.1002/14651858.cd011737.pub3&lt;/a&gt; &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:2&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Mensink, R. P., &amp;amp; World Health Organization (2016). &lt;a href=&quot;https://iris.who.int/bitstream/handle/10665/246104/9789241565349-eng.pdf&quot;&gt;Effects of saturated fatty acids on serum lipids and lipoproteins: a systematic review and regression analysis.&lt;/a&gt; &lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:3&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Koutsofta, I., Mamais, I., &amp;amp; Chrysostomou, S. (2018). &lt;a href=&quot;https://doi.org/10.1080/08952841.2018.1418822&quot;&gt;The effect of protein diets in postmenopausal women with osteoporosis: Systematic review of randomized controlled trials.&lt;/a&gt; &lt;a href=&quot;#fnref:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:4&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I heard about this research on the &lt;a href=&quot;https://www.youtube.com/watch?v=O0IK3ap4wQY&quot;&gt;Iron Culture podcast&lt;/a&gt;, in which they went on to complain about how people care too much about mechanisms and ignore experimental evidence. It got me thinking about an apparent contradiction in my beliefs where I care a lot about mechanisms for ruling out astrology and ESP, but I don’t really care about mechanisms in nutrition or exercise science. After thinking about it, I realized that my position is perfectly sensible—it’s about using mechanisms to rule hypotheses &lt;em&gt;out&lt;/em&gt; vs. &lt;em&gt;in&lt;/em&gt;—and that’s how I came up with the idea to write this post. &lt;a href=&quot;#fnref:4&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:6&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I want to be careful not to say that an alkaline diet is unhealthy. Alkaline foods do tend to be particularly healthy—they’re mostly fruits and vegetables—but that’s coincidental, not because they’re alkaline per se. &lt;a href=&quot;#fnref:6&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:5&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Vreeman, R. C., &amp;amp; Carroll, A. E. (2008). &lt;a href=&quot;https://doi.org/10.1136/bmj.a2769&quot;&gt;Festive medical myths.&lt;/a&gt; &lt;a href=&quot;#fnref:5&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:7&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;&lt;a href=&quot;https://nutritionsource.hsph.harvard.edu/what-should-you-eat/fats-and-cholesterol/cholesterol/&quot;&gt;https://nutritionsource.hsph.harvard.edu/what-should-you-eat/fats-and-cholesterol/cholesterol/&lt;/a&gt; &lt;a href=&quot;#fnref:7&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>How Much Does It Cost to Offset an LLM Subscription?</title>
				<pubDate>Sat, 04 Oct 2025 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2025/10/04/cost_to_offset_LLM_subscription/</link>
				<guid isPermaLink="true">http://mdickens.me/2025/10/04/cost_to_offset_LLM_subscription/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;Is &lt;a href=&quot;https://forum.effectivealtruism.org/topics/moral-offsetting&quot;&gt;moral offsetting&lt;/a&gt; a good idea? Is it ethical to spend money on something harmful, and then donate to a charity that works to counteract those harms?&lt;/p&gt;

&lt;p&gt;I’m not going to answer that question. Instead I’m going to ask a different question: if you use an LLM, how much do you have to donate to AI safety to offset the harm of using an LLM?&lt;/p&gt;

&lt;p&gt;I can’t give a definitive answer, of course. But I can make an educated guess, and my educated guess is that for every $1 spent on an LLM subscription, you need to donate $0.87 to AI safety charities.&lt;/p&gt;

&lt;!-- more --&gt;

&lt;p&gt;First things first: Why do I believe it’s harmful to buy an LLM subscription?&lt;/p&gt;

&lt;p&gt;Paying money to a frontier AI company increases their revenue, and they spend some of that revenue on building more powerful AI systems. Eventually, they build a superintelligent AI. That AI has a good chance of being misaligned and then &lt;a href=&quot;https://intelligence.org/briefing/&quot;&gt;killing everyone in the world&lt;/a&gt;. When you buy an LLM subscription, you cause that to happen slightly faster.&lt;/p&gt;

&lt;p&gt;But you can also donate to nonprofits that are working to prevent AI from killing everyone. How much do you need to donate to a nonprofit to offset the harm of a $20/month LLM subscription?&lt;/p&gt;

&lt;p&gt;I built a simple &lt;a href=&quot;https://squigglehub.org/models/AI-safety/LLM-subscription-offsets&quot;&gt;Squiggle model&lt;/a&gt; to answer that question.&lt;/p&gt;

&lt;h2 id=&quot;the-model&quot;&gt;The model&lt;/h2&gt;

&lt;p&gt;Four key facts:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;When you give a company an additional dollar of revenue, that raises its future valuation by some number.&lt;/li&gt;
  &lt;li&gt;A higher valuation lets the company raise more capital and thus spend some additional amount of money.&lt;/li&gt;
  &lt;li&gt;AI companies will spend a total of some amount in 2026.&lt;/li&gt;
  &lt;li&gt;Meanwhile, it would take some amount of money directed to AI safety nonprofits to cancel out the harm of AI companies’ spending.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;From those, you can estimate how much you need to donate using the following procedure:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Start from the dollar value of your subscription.&lt;/li&gt;
  &lt;li&gt;Calculate how much that will increase company valuation.&lt;/li&gt;
  &lt;li&gt;Translate increased valuation into increased expenditures.&lt;/li&gt;
  &lt;li&gt;Divide by expected total expenditures of frontier AI companies.&lt;/li&gt;
  &lt;li&gt;Multiply by expected total cost to offset AI company harm.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The resulting number is the amount to donate to AI safety nonprofits.&lt;/p&gt;
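&lt;p&gt;The five-step procedure above can be sketched in a few lines. The numbers in the example call are placeholder point estimates drawn from the ranges discussed in the inputs section, not the actual distributional inputs of the Squiggle model:&lt;/p&gt;

```python
# A minimal sketch of the five-step offset estimate described above.
# All example values are illustrative placeholders.

def offset_donation(
    subscription_dollars: float,   # step 1: what you pay the AI company
    revenue_to_valuation: float,   # step 2: valuation gained per $1 of revenue
    valuation_to_spending: float,  # step 3: fraction of valuation raised and spent
    total_ai_spending: float,      # step 4: total frontier-AI expenditures
    total_offset_cost: float,      # step 5: cost to offset all of that harm
) -> float:
    extra_valuation = subscription_dollars * revenue_to_valuation
    extra_spending = extra_valuation * valuation_to_spending
    share_of_harm = extra_spending / total_ai_spending
    return share_of_harm * total_offset_cost

# Placeholder point estimates: 7x revenue multiple, 10% of valuation
# raised and spent, $66B total spending, $3B/year to offset all harm.
print(round(offset_donation(240, 7, 0.10, 66e9, 3e9), 2))  # prints 7.64
```

&lt;p&gt;With these placeholders, a $240/year subscription implies a roughly $7.64/year offset, or about $0.03 per dollar—on the order of the model’s median estimate rather than its mean.&lt;/p&gt;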

&lt;p&gt;There are some difficult questions that this model avoids having to answer:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;We don’t care what proportion of AI company spending goes to R&amp;amp;D on frontier models; we only care about total spending.&lt;/li&gt;
  &lt;li&gt;We don’t care to what extent x-risk is increased or decreased per dollar spent.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;the-inputs&quot;&gt;The inputs&lt;/h2&gt;

&lt;p&gt;The model has four inputs: (1) the revenue-to-valuation ratio; (2) the valuation-to-expenditures ratio; (3) frontier AI company expenditures; (4) total cost to offset AI company harm. In this section, I will explain how I estimated the values of those inputs.&lt;/p&gt;

&lt;h3 id=&quot;revenue-to-valuation-ratio&quot;&gt;Revenue-to-valuation ratio&lt;/h3&gt;

&lt;p&gt;How much does a dollar of revenue raise an AI company’s valuation? I can see arguments for both “hardly at all” and “a lot”.&lt;/p&gt;

&lt;p&gt;In favor of “hardly at all”: VCs give AI companies funding on the expectation that their products will be incredibly useful in the future, which doesn’t have much to do with current revenue.&lt;/p&gt;

&lt;p&gt;In favor of “a lot”: AI companies raise funding at high revenue multiples, e.g. Anthropic raised its last round (as of September 2025) at 36x revenue (&lt;a href=&quot;https://www.anthropic.com/news/anthropic-raises-series-f-at-usd183b-post-money-valuation&quot;&gt;source&lt;/a&gt;). This could mean that VCs expect $1 of revenue today to convert to $36 in future value, i.e. revenue has a 36:1 multiplier effect.&lt;/p&gt;

&lt;p&gt;A typical 2025 startup valuation is 7x revenue (&lt;a href=&quot;https://www.saas-capital.com/blog-posts/private-saas-company-valuations-multiples/&quot;&gt;source&lt;/a&gt;). As a median estimate, we could say that $1 of AI company revenue converts to $7 of valuation, and the extra 5x multiplier is driven by high expectations for future AI products.&lt;/p&gt;

&lt;p&gt;(I briefly looked into how startup funding scales with revenue and I didn’t find any useful evidence.)&lt;/p&gt;

&lt;p&gt;Growth rate matters more for valuation than revenue does, but I don’t think this changes the calculation in the short term because an extra $1 of 2025 revenue also represents an extra $1 in growth relative to 2024 revenue.&lt;/p&gt;

&lt;h3 id=&quot;valuation-to-expenditures-ratio&quot;&gt;Valuation-to-expenditures ratio&lt;/h3&gt;

&lt;p&gt;How much does $1 of company valuation translate into increased expenditures?&lt;/p&gt;

&lt;p&gt;Private companies don’t usually publish that information. But based on historical data for AI companies and general trends for startups, it’s reasonable to expect companies to raise capital equal to 5% to 20% of the valuation.&lt;/p&gt;

&lt;p&gt;(I’m thinking of AI companies as startups; “startup” connotes “small”, which they clearly aren’t, but I’m using the term in the Paul Graham &lt;a href=&quot;https://paulgraham.com/growth.html&quot;&gt;startup = growth&lt;/a&gt; sense. Frontier AI companies are startups because they’re growing fast.)&lt;/p&gt;

&lt;h3 id=&quot;frontier-ai-company-expenditures&quot;&gt;Frontier AI company expenditures&lt;/h3&gt;

&lt;p&gt;Public data on AI company fundraising in 2025:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://tracxn.com/d/companies/anthropic/__SzoxXDMin-NK5tKB7ks8yHr6S9Mz68pjVCzFEcGFZ08/funding-and-investors#funding-rounds&quot;&gt;Anthropic&lt;/a&gt;: $13B&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://tracxn.com/d/companies/openai/__kElhSG7uVGeFk1i71Co9-nwFtmtyMVT7f-YHMn4TFBg/funding-and-investors&quot;&gt;OpenAI&lt;/a&gt;: $40B&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://tracxn.com/d/companies/xai/__saKrxbHN3TRWW-I4lYH6zkx6N5P_kMTqlLcKTzWs2ug#about-the-company&quot;&gt;xAI&lt;/a&gt;: $10B maybe? (the publicly available data only shows total funding, not individual rounds)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Assume these three companies account for half of AI spending and that the funding they raised will last 18 months; that means AI companies will spend $66B in 2026.&lt;/p&gt;

&lt;h3 id=&quot;total-cost-to-offset-ai-company-harm&quot;&gt;Total cost to offset AI company harm&lt;/h3&gt;

&lt;p&gt;This is the hardest number to estimate. My assumption is that the AI safety community currently spends on the order of $30 million to $100 million per year, and if we spent on the order of 10–100x more, then that would be enough to fully offset the harms of AI companies.&lt;/p&gt;

&lt;p&gt;I suspect that spending 100x more on pure alignment research would not be enough. But spending 100x more would likely be enough if some of the spending goes to governance/policy/advocacy, and some goes to things that have multiplier effects (e.g. you could spend $1 to cause AI companies to contribute $10 more to safety research). I’m also assuming you can make AI safe merely by throwing money at the problem, which is clearly false, but it makes sense to assume it’s true for the purposes of this model.&lt;/p&gt;

&lt;h3 id=&quot;the-answer-according-to-my-model&quot;&gt;The answer (according to my model)&lt;/h3&gt;

&lt;p&gt;Put all those numbers together and the &lt;a href=&quot;https://squigglehub.org/models/AI-safety/LLM-subscription-offsets&quot;&gt;model&lt;/a&gt; spits out a mean cost of $0.87 in donations for every $1 spent on LLM subscriptions. That means for a $20/month subscription, according to the model you’d need to donate $17/month to AI safety orgs.&lt;/p&gt;

&lt;p&gt;The model’s &lt;em&gt;median&lt;/em&gt; estimate is only $0.06—which is to say, an LLM subscription probably only does a little bit of harm. But there is a small probability that you need to donate quite a bit more to offset your LLM usage, so the &lt;em&gt;expected&lt;/em&gt; cost is much higher at $0.87.&lt;/p&gt;
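&lt;p&gt;A large gap between median and mean like this is the signature of a heavy-tailed distribution. A toy illustration—the lognormal parameters here are my own choices picked to roughly reproduce the gap, not values taken from the Squiggle model:&lt;/p&gt;

```python
import random
import statistics

random.seed(0)

# Illustrative only: a lognormal cost-per-dollar distribution whose
# parameters are chosen so the median lands near $0.06 while the
# mean lands near $0.87, mimicking the gap in the model's output.
mu, sigma = -2.8, 2.3
samples = [random.lognormvariate(mu, sigma) for _ in range(100_000)]

print(statistics.median(samples))  # near exp(mu) ~ 0.06
print(statistics.fmean(samples))   # near exp(mu + sigma**2 / 2) ~ 0.86
```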

&lt;h2 id=&quot;limitations-of-the-model&quot;&gt;Limitations of the model&lt;/h2&gt;

&lt;p&gt;Like any model, this one does not perfectly match reality. Some examples of problems this model has:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;I have no clue what the total cost is to offset the harm of AI companies.&lt;/li&gt;
  &lt;li&gt;The model assumes the money you donate does as much good as the average dollar spent on AI safety. But maybe your dollars can be above average. (Or they could even be below average.)&lt;/li&gt;
  &lt;li&gt;Maybe giving more money to Anthropic is good actually, because Anthropic is the least unsafe AI company and speeding them up improves our chances.&lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;&lt;/li&gt;
  &lt;li&gt;Is moral offsetting even okay? Maybe we should obey a rule-utilitarian constraint against doing bad things, even if we offset them. Or maybe moral offsetting is silly and we should just donate to whatever charity is most effective.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;I have a subscription to Claude. Last year I donated a lot of money to AI safety, but I didn’t make any donations specifically for offsetting. Having thought about it more while writing this post, I will start donating an extra $240/year—$1 donated for every $1 spent on Claude. My model suggested donating 87 cents per dollar, but the model isn’t that precise, and $1-per-dollar is a nice round number. I’m still undecided on whether the concept of moral offsetting makes sense, but I figure I might as well do it.&lt;/p&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:2&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;In my first draft, I also said it might be net good to give AI companies money if they’ll use some of it on alignment research. But on reflection, I’m pretty sure that’s wrong, because giving them money speeds up AI progress, and there’s no strong reason to expect that increasing AI company revenue will increase &lt;em&gt;total&lt;/em&gt; expenditures on alignment.&lt;/p&gt;

      &lt;p&gt;I also expect it’s bad to speed up Anthropic, but I’m not confident about that. &lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>I Made an Emacs Extension That Displays Magic: the Gathering Card Tooltips</title>
				<pubDate>Fri, 03 Oct 2025 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2025/10/03/mtg_emacs/</link>
				<guid isPermaLink="true">http://mdickens.me/2025/10/03/mtg_emacs/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;This post is about the niche intersection of Emacs and Magic: the Gathering.&lt;/p&gt;

&lt;p&gt;I considered not writing this because I figured, surely if you multiply the proportion of people who play Magic by the proportion of people who use Emacs, you get a very small number. But then I thought, those two variables are probably not independent. And the intersection of &lt;code&gt;Magic players&lt;/code&gt; x &lt;code&gt;Emacs users&lt;/code&gt; x &lt;code&gt;people who read my blog&lt;/code&gt; might actually be greater than zero. So if you’re out there, this post is for you.&lt;/p&gt;

&lt;p&gt;Do you like how MTG websites like &lt;a href=&quot;https://magic.gg/&quot;&gt;magic.gg&lt;/a&gt; and &lt;a href=&quot;https://mtg.wiki/&quot;&gt;mtg.wiki&lt;/a&gt; let you mouse over a card name to see a picture of the card? Well, I wrote an Emacs extension that replicates that functionality.&lt;/p&gt;

&lt;!-- more --&gt;

&lt;p&gt;Here is the code: &lt;a href=&quot;https://github.com/michaeldickens/emacs-mtg&quot;&gt;https://github.com/michaeldickens/emacs-mtg&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;a href=&quot;https://github.com/michaeldickens/emacs-mtg&quot;&gt;README&lt;/a&gt; on GitHub pretty much explains how it works, so the rest of this post is just gonna repeat what it says in the README.&lt;/p&gt;

&lt;h2 id=&quot;usage&quot;&gt;Usage&lt;/h2&gt;

&lt;blockquote&gt;
  &lt;pre&gt;&lt;code&gt;(add-to-list &apos;load-path &quot;/path/to/&quot;) ; the directory containing mtg.el
(require &apos;mtg)
&lt;/code&gt;&lt;/pre&gt;
&lt;/blockquote&gt;

&lt;p&gt;This module allows you to refer to Magic cards in Org Mode using a new type of link prefixed with &lt;code&gt;mtg:&lt;/code&gt;. For example:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;[[mtg:Black Lotus]] might be the strongest card in my collection, but my personal favorite is [[mtg:Grizzly Bears]].&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;When Org Mode sees a link to an MTG card, it will do the following:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;If the card is not downloaded yet, download the card by querying the &lt;a href=&quot;https://scryfall.com/&quot;&gt;Scryfall&lt;/a&gt; API for a card with the given name.&lt;/li&gt;
  &lt;li&gt;When you open the link (using &lt;code&gt;org-open-at-point&lt;/code&gt; or &lt;code&gt;C-c C-o&lt;/code&gt;), Emacs displays an image of the card in the minibuffer.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Here’s how it looks:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;https://raw.githubusercontent.com/michaeldickens/emacs-mtg/refs/heads/master/example-grizzly-bears.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;By default, card images and data are downloaded to &lt;code&gt;~/.emacs.d/mtg-cards/&lt;/code&gt;, but you can change this by customizing the variable &lt;code&gt;mtg/db-path&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Scryfall’s API has fuzzy name matching, so for example &lt;code&gt;[[mtg:blac lotus]]&lt;/code&gt; will display Black Lotus.&lt;/p&gt;

&lt;h2 id=&quot;card-legality&quot;&gt;Card legality&lt;/h2&gt;

&lt;p&gt;Cards are displayed with a red tint if they are illegal in the preferred format. It looks like this:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;https://raw.githubusercontent.com/michaeldickens/emacs-mtg/refs/heads/master/example-black-lotus.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;When checking legality, this module uses Standard format by default, but you can customize it by setting the variable &lt;code&gt;mtg/default-format&lt;/code&gt;. You can also set file-local or heading-local formats in Org Mode using the &lt;code&gt;:MTG_FORMAT:&lt;/code&gt; property. For example:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;pre&gt;&lt;code&gt;:PROPERTIES:
:MTG_FORMAT: standard
:END:
If you open this link --&amp;gt; [[mtg:Black Lotus]], the card will appear
with a red tint because it&apos;s illegal in Standard.

** My vintage cards
  :PROPERTIES:
  :MTG_FORMAT: vintage
  :END:
  [[mtg:Black Lotus]] is legal in Vintage, so here it will
  appear with no tint.
&lt;/code&gt;&lt;/pre&gt;
&lt;/blockquote&gt;

&lt;p&gt;Note: Adding a red tint requires &lt;a href=&quot;https://imagemagick.org/&quot;&gt;ImageMagick&lt;/a&gt;. If you don’t have ImageMagick installed, all cards will be displayed as if they’re legal.&lt;/p&gt;
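If you prefer setting these defaults once in your init file rather than per-file or per-heading, something like the following should work (a sketch: the variable names are the ones documented above, but treating the format as a lowercase string is my assumption, based on the `:MTG_FORMAT:` examples):

```elisp
;; Sketch of init-file customization. Variable names are documented above;
;; the exact accepted values are an assumption.
(setq mtg/db-path "~/mtg-card-cache/")  ; where card images/data are stored
(setq mtg/default-format "vintage")     ; format used for legality checks
```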

&lt;h2 id=&quot;exporting-to-html&quot;&gt;Exporting to HTML&lt;/h2&gt;

&lt;p&gt;If you export Org Mode files to HTML, you can make the MTG card links display images on hover. For this to work, you must include some custom CSS in your Org Mode file.&lt;/p&gt;

&lt;p&gt;On GitHub there is a file called &lt;a href=&quot;https://github.com/michaeldickens/emacs-mtg/blob/master/export-style.setup&quot;&gt;export-style.setup&lt;/a&gt; that includes some custom CSS. To include this custom CSS in Org Mode, put this line at the top of your Org Mode file:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;pre&gt;&lt;code&gt;#+SETUPFILE: /path/to/export-style.setup
&lt;/code&gt;&lt;/pre&gt;
&lt;/blockquote&gt;

&lt;p&gt;Then call &lt;code&gt;org-export-dispatch&lt;/code&gt; to export the Org file to HTML.&lt;/p&gt;

&lt;h2 id=&quot;table-utilities&quot;&gt;Table utilities&lt;/h2&gt;

&lt;p&gt;mtg.el comes with functions for working with Org Mode tables. The functions assume you have a table where one column contains links to MTG cards, like this:&lt;/p&gt;

&lt;table&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;[[mtg:Black Lotus]]&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;[[mtg:Grizzly Bears]]&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;[[mtg:Colossal Dreadmaw]]&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;&lt;code&gt;mtg/table-sort-by-property&lt;/code&gt; takes a property as a string (such as “name”, “rarity”, or “color”) and sorts the table by looking up that property for each card. This only works if you’ve already downloaded the card info (which happens when you view the card or export the whole file).&lt;/p&gt;

&lt;p&gt;&lt;code&gt;mtg/table-insert-column&lt;/code&gt; takes a property as a string and inserts a new column containing that property for each card. For example, calling &lt;code&gt;(mtg/table-insert-column &quot;rarity&quot;)&lt;/code&gt; on the table above produces this:&lt;/p&gt;

&lt;table&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;[[mtg:Black Lotus]]&lt;/td&gt;
      &lt;td&gt;bonus&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;[[mtg:Grizzly Bears]]&lt;/td&gt;
      &lt;td&gt;common&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;[[mtg:Colossal Dreadmaw]]&lt;/td&gt;
      &lt;td&gt;common&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;If a property is missing, the cell will be left blank. For example, calling &lt;code&gt;(mtg/table-insert-column &quot;power&quot;)&lt;/code&gt; produces&lt;/p&gt;

&lt;table&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;[[mtg:Black Lotus]]&lt;/td&gt;
      &lt;td&gt; &lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;[[mtg:Grizzly Bears]]&lt;/td&gt;
      &lt;td&gt;2&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;[[mtg:Colossal Dreadmaw]]&lt;/td&gt;
      &lt;td&gt;6&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;You can also call &lt;code&gt;mtg/get-property&lt;/code&gt; to return a property for the card at point.&lt;/p&gt;
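For instance, assuming it takes the property name as a string like the table functions do, with point on a card link you could evaluate:

```elisp
;; Hypothetical call with point on [[mtg:Grizzly Bears]]:
(mtg/get-property "rarity")  ; "common", per the table above
```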



                </description>
			</item>
		
			<item>
				<title>AI Safety Landscape and Strategic Gaps</title>
				<pubDate>Fri, 19 Sep 2025 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2025/09/19/ai_safety_landscape/</link>
				<guid isPermaLink="true">http://mdickens.me/2025/09/19/ai_safety_landscape/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;I wrote a &lt;a href=&quot;https://forum.effectivealtruism.org/posts/CbHX5zL2uEvTasuiP/ai-safety-landscape-and-strategic-gaps&quot;&gt;report&lt;/a&gt; giving a high-level review of what work people are doing in AI safety. The report specifically focused on two areas: AI policy/advocacy and non-human welfare (including animals and digital minds).&lt;/p&gt;

&lt;p&gt;You can read the report below. I was commissioned to write it by Rethink Priorities, but the beliefs expressed are my own.&lt;/p&gt;

&lt;!-- more --&gt;

&lt;h1 id=&quot;contents&quot;&gt;Contents&lt;/h1&gt;

&lt;ul id=&quot;markdown-toc&quot;&gt;
  &lt;li&gt;&lt;a href=&quot;#contents&quot; id=&quot;markdown-toc-contents&quot;&gt;Contents&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#introduction&quot; id=&quot;markdown-toc-introduction&quot;&gt;Introduction&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#prelude&quot; id=&quot;markdown-toc-prelude&quot;&gt;Prelude&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#some-positions-im-going-to-take-as-given&quot; id=&quot;markdown-toc-some-positions-im-going-to-take-as-given&quot;&gt;Some positions I’m going to take as given&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#definitions&quot; id=&quot;markdown-toc-definitions&quot;&gt;Definitions&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#prioritization&quot; id=&quot;markdown-toc-prioritization&quot;&gt;Prioritization&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#why-not-technical-safety-research&quot; id=&quot;markdown-toc-why-not-technical-safety-research&quot;&gt;Why not technical safety research?&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#why-not-ai-policy-research&quot; id=&quot;markdown-toc-why-not-ai-policy-research&quot;&gt;Why not AI policy research?&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#downsides-of-ai-policyadvocacy-and-why-theyre-not-too-big&quot; id=&quot;markdown-toc-downsides-of-ai-policyadvocacy-and-why-theyre-not-too-big&quot;&gt;Downsides of AI policy/advocacy (and why they’re not too big)&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#what-kinds-of-policies-might-reduce-ai-x-risk&quot; id=&quot;markdown-toc-what-kinds-of-policies-might-reduce-ai-x-risk&quot;&gt;What kinds of policies might reduce AI x-risk?&lt;/a&gt;        &lt;ul&gt;
          &lt;li&gt;&lt;a href=&quot;#some-ai-policy-ideas-i-like&quot; id=&quot;markdown-toc-some-ai-policy-ideas-i-like&quot;&gt;Some AI policy ideas I like&lt;/a&gt;&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#maybe-prioritizing-post-tai-animal-welfare&quot; id=&quot;markdown-toc-maybe-prioritizing-post-tai-animal-welfare&quot;&gt;Maybe prioritizing post-TAI animal welfare&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#why-not-prioritize-digital-minds--s-risks--moral-error--better-futures--ai-misuse-x-risk--gradual-disempowerment&quot; id=&quot;markdown-toc-why-not-prioritize-digital-minds--s-risks--moral-error--better-futures--ai-misuse-x-risk--gradual-disempowerment&quot;&gt;Why not prioritize digital minds / S-risks / moral error / better futures / AI misuse x-risk / gradual disempowerment?&lt;/a&gt;        &lt;ul&gt;
          &lt;li&gt;&lt;a href=&quot;#whos-working-on-them&quot; id=&quot;markdown-toc-whos-working-on-them&quot;&gt;Who’s working on them?&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#some-relevant-research-agendas&quot; id=&quot;markdown-toc-some-relevant-research-agendas&quot;&gt;Some relevant research agendas&lt;/a&gt;&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#general-recommendations&quot; id=&quot;markdown-toc-general-recommendations&quot;&gt;General recommendations&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#advocacy-should-emphasize-x-risk-and-misalignment-risk&quot; id=&quot;markdown-toc-advocacy-should-emphasize-x-risk-and-misalignment-risk&quot;&gt;Advocacy should emphasize x-risk and misalignment risk&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#prioritize-work-that-pays-off-if-timelines-are-short&quot; id=&quot;markdown-toc-prioritize-work-that-pays-off-if-timelines-are-short&quot;&gt;Prioritize work that pays off if timelines are short&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#top-project-ideas&quot; id=&quot;markdown-toc-top-project-ideas&quot;&gt;Top project ideas&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#talk-to-policy-makers-about-ai-x-risk&quot; id=&quot;markdown-toc-talk-to-policy-makers-about-ai-x-risk&quot;&gt;Talk to policy-makers about AI x-risk&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#write-ai-x-risk-legislation&quot; id=&quot;markdown-toc-write-ai-x-risk-legislation&quot;&gt;Write AI x-risk legislation&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#advocate-to-change-ai-training-to-make-llms-more-animal-friendly&quot; id=&quot;markdown-toc-advocate-to-change-ai-training-to-make-llms-more-animal-friendly&quot;&gt;Advocate to change AI training to make LLMs more animal-friendly&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#develop-new-plans--evaluate-existing-plans-to-improve-post-tai-animal-welfare&quot; id=&quot;markdown-toc-develop-new-plans--evaluate-existing-plans-to-improve-post-tai-animal-welfare&quot;&gt;Develop new plans / evaluate existing plans to improve post-TAI animal welfare&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#honorable-mentions&quot; id=&quot;markdown-toc-honorable-mentions&quot;&gt;Honorable mentions&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#directly-push-for-an-international-ai-treaty&quot; id=&quot;markdown-toc-directly-push-for-an-international-ai-treaty&quot;&gt;Directly push for an international AI treaty&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#organize-a-voluntary-commitment-by-ai-scientists-not-to-build-advanced-ai&quot; id=&quot;markdown-toc-organize-a-voluntary-commitment-by-ai-scientists-not-to-build-advanced-ai&quot;&gt;Organize a voluntary commitment by AI scientists not to build advanced AI&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#peaceful-protests&quot; id=&quot;markdown-toc-peaceful-protests&quot;&gt;Peaceful protests&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#media-about-dangers-of-ai&quot; id=&quot;markdown-toc-media-about-dangers-of-ai&quot;&gt;Media about dangers of AI&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#message-testing&quot; id=&quot;markdown-toc-message-testing&quot;&gt;Message testing&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#host-a-website-for-discussion-of-ai-safety-and-other-important-issues&quot; id=&quot;markdown-toc-host-a-website-for-discussion-of-ai-safety-and-other-important-issues&quot;&gt;Host a website for discussion of AI safety and other important issues&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#list-of-other-project-ideas&quot; id=&quot;markdown-toc-list-of-other-project-ideas&quot;&gt;List of other project ideas&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#ai-for-animals-ideas&quot; id=&quot;markdown-toc-ai-for-animals-ideas&quot;&gt;AI-for-animals ideas&lt;/a&gt;        &lt;ul&gt;
          &lt;li&gt;&lt;a href=&quot;#neartermist-animal-advocacy&quot; id=&quot;markdown-toc-neartermist-animal-advocacy&quot;&gt;Neartermist animal advocacy&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#using-tai-to-improve-farm-animal-welfare&quot; id=&quot;markdown-toc-using-tai-to-improve-farm-animal-welfare&quot;&gt;Using TAI to improve farm animal welfare&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#lobby-governments-to-include-animal-welfare-in-ai-regulations&quot; id=&quot;markdown-toc-lobby-governments-to-include-animal-welfare-in-ai-regulations&quot;&gt;Lobby governments to include animal welfare in AI regulations&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#traditional-animal-advocacy-targeted-at-frontier-ai-developers&quot; id=&quot;markdown-toc-traditional-animal-advocacy-targeted-at-frontier-ai-developers&quot;&gt;Traditional animal advocacy targeted at frontier AI developers&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#research-which-alignment-strategies-are-more-likely-to-be-good-for-animals&quot; id=&quot;markdown-toc-research-which-alignment-strategies-are-more-likely-to-be-good-for-animals&quot;&gt;Research which alignment strategies are more likely to be good for animals&lt;/a&gt;&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#ai-policyadvocacy-ideas&quot; id=&quot;markdown-toc-ai-policyadvocacy-ideas&quot;&gt;AI policy/advocacy ideas&lt;/a&gt;        &lt;ul&gt;
          &lt;li&gt;&lt;a href=&quot;#improving-us--china-relations--international-peace&quot; id=&quot;markdown-toc-improving-us--china-relations--international-peace&quot;&gt;Improving US &amp;lt;&amp;gt; China relations / international peace&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#talk-to-international-peace-orgs-about-ai&quot; id=&quot;markdown-toc-talk-to-international-peace-orgs-about-ai&quot;&gt;Talk to international peace orgs about AI&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#increasing-government-expertise-about-ai&quot; id=&quot;markdown-toc-increasing-government-expertise-about-ai&quot;&gt;Increasing government expertise about AI&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#policyadvocacy-in-china&quot; id=&quot;markdown-toc-policyadvocacy-in-china&quot;&gt;Policy/advocacy in China&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#corporate-campaigns-to-advocate-for-safety&quot; id=&quot;markdown-toc-corporate-campaigns-to-advocate-for-safety&quot;&gt;Corporate campaigns to advocate for safety&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#develop-ai-safetysecurityevaluation-standards&quot; id=&quot;markdown-toc-develop-ai-safetysecurityevaluation-standards&quot;&gt;Develop AI safety/security/evaluation standards&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#slow-down-chinese-ai-development-via-ordinary-foreign-policy&quot; id=&quot;markdown-toc-slow-down-chinese-ai-development-via-ordinary-foreign-policy&quot;&gt;Slow down Chinese AI development via ordinary foreign policy&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#whistleblower-protectionsupport&quot; id=&quot;markdown-toc-whistleblower-protectionsupport&quot;&gt;Whistleblower protection/support&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#opinion-polling&quot; id=&quot;markdown-toc-opinion-polling&quot;&gt;Opinion polling&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#help-ai-company-employees-improve-safety-within-their-companies&quot; id=&quot;markdown-toc-help-ai-company-employees-improve-safety-within-their-companies&quot;&gt;Help AI company employees improve safety within their companies&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#direct-talks-with-ai-companies-to-make-them-safer&quot; id=&quot;markdown-toc-direct-talks-with-ai-companies-to-make-them-safer&quot;&gt;Direct talks with AI companies to make them safer&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#monitor-ai-companies-on-safety-standards&quot; id=&quot;markdown-toc-monitor-ai-companies-on-safety-standards&quot;&gt;Monitor AI companies on safety standards&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#create-a-petition-or-open-letter-on-ai-risk&quot; id=&quot;markdown-toc-create-a-petition-or-open-letter-on-ai-risk&quot;&gt;Create a petition or open letter on AI risk&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#create-demonstrations-of-dangerous-ai-capabilities&quot; id=&quot;markdown-toc-create-demonstrations-of-dangerous-ai-capabilities&quot;&gt;Create demonstrations of dangerous AI capabilities&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#sue-openai-for-violating-its-nonprofit-mission&quot; id=&quot;markdown-toc-sue-openai-for-violating-its-nonprofit-mission&quot;&gt;Sue OpenAI for violating its nonprofit mission&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#send-people-ai-safety-books&quot; id=&quot;markdown-toc-send-people-ai-safety-books&quot;&gt;Send people AI safety books&lt;/a&gt;&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#ai-research-ideas&quot; id=&quot;markdown-toc-ai-research-ideas&quot;&gt;AI research ideas&lt;/a&gt;        &lt;ul&gt;
          &lt;li&gt;&lt;a href=&quot;#research-on-how-to-get-people-to-extrapolate&quot; id=&quot;markdown-toc-research-on-how-to-get-people-to-extrapolate&quot;&gt;Research on how to get people to extrapolate&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#investigate-how-to-use-ai-to-reduce-other-x-risks&quot; id=&quot;markdown-toc-investigate-how-to-use-ai-to-reduce-other-x-risks&quot;&gt;Investigate how to use AI to reduce other x-risks&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#a-short-timelines-alignment-plan-that-doesnt-rely-on-bootstrapping&quot; id=&quot;markdown-toc-a-short-timelines-alignment-plan-that-doesnt-rely-on-bootstrapping&quot;&gt;A short-timelines alignment plan that doesn’t rely on bootstrapping&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#rigorous-analysis-of-the-various-ways-alignment-bootstrapping-could-fail&quot; id=&quot;markdown-toc-rigorous-analysis-of-the-various-ways-alignment-bootstrapping-could-fail&quot;&gt;Rigorous analysis of the various ways alignment bootstrapping could fail&lt;/a&gt;&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#future-work&quot; id=&quot;markdown-toc-future-work&quot;&gt;Future work&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#pros-and-cons-of-slowing-down-ai-development-with-numeric-credences&quot; id=&quot;markdown-toc-pros-and-cons-of-slowing-down-ai-development-with-numeric-credences&quot;&gt;Pros and cons of slowing down AI development, with numeric credences&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#quantitative-model-on-ai-x-risk-vs-other-x-risks&quot; id=&quot;markdown-toc-quantitative-model-on-ai-x-risk-vs-other-x-risks&quot;&gt;Quantitative model on AI x-risk vs. other x-risks&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#deeper-investigation-of-the-ai-arms-race-situation&quot; id=&quot;markdown-toc-deeper-investigation-of-the-ai-arms-race-situation&quot;&gt;Deeper investigation of the AI arms race situation&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#does-slowing-downpausing-ai-help-solve-non-alignment-problems&quot; id=&quot;markdown-toc-does-slowing-downpausing-ai-help-solve-non-alignment-problems&quot;&gt;Does slowing down/pausing AI help solve non-alignment problems?&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#determine-when-will-be-the-right-time-to-push-for-strong-restrictions-on-ai-if-not-now&quot; id=&quot;markdown-toc-determine-when-will-be-the-right-time-to-push-for-strong-restrictions-on-ai-if-not-now&quot;&gt;Determine when will be the right time to push for strong restrictions on AI (if not now)&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#supplements&quot; id=&quot;markdown-toc-supplements&quot;&gt;Supplements&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#notes&quot; id=&quot;markdown-toc-notes&quot;&gt;Notes&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h1 id=&quot;introduction&quot;&gt;Introduction&lt;/h1&gt;

&lt;p&gt;This report was prompted by two questions:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;What are some things we can do to make transformative AI go well?&lt;/li&gt;
  &lt;li&gt;What are a few high-priority projects that deserve more attention?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I reviewed the AI safety landscape, starting by &lt;a href=&quot;#prioritization&quot;&gt;prioritizing&lt;/a&gt; to narrow my focus to areas that look particularly promising and feasible for me to review. I focused on two areas:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;
    &lt;p&gt;AI x-risk advocacy&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Making transformative AI go well for animals&lt;/p&gt;
  &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;A summary of my reasoning on &lt;a href=&quot;#prioritization&quot;&gt;prioritization&lt;/a&gt; regarding AI misalignment risk:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;I focused on AI policy advocacy over technical safety research, primarily because it’s much more neglected, and there are other people with more expertise than me who already look for neglected research ideas. [&lt;a href=&quot;#why-not-technical-safety-research&quot;&gt;More&lt;/a&gt;]&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;I focused on policy &lt;em&gt;advocacy&lt;/em&gt; over policy &lt;em&gt;research&lt;/em&gt;, again because it’s particularly neglected. [&lt;a href=&quot;#why-not-ai-policy-research&quot;&gt;More&lt;/a&gt;]&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;I considered the downsides of advocacy, which ultimately I don’t believe are strong enough to outweigh the upsides. [&lt;a href=&quot;#downsides-of-ai-policyadvocacy-and-why-theyre-not-too-big&quot;&gt;More&lt;/a&gt;]&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;I decided not to spend much time evaluating which policies are best to advocate for, because a wide variety of policies could be helpful, and we need more advocacy in general. [&lt;a href=&quot;#what-kinds-of-policies-might-reduce-ai-x-risk&quot;&gt;More&lt;/a&gt;]&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Regarding AI issues beyond misalignment:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;Transformative AI may not go well for animals. There are some tractable interventions for improving post-TAI animal welfare. [&lt;a href=&quot;#maybe-prioritizing-post-tai-animal-welfare&quot;&gt;More&lt;/a&gt;]&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;There are other important issues like digital sentience, AI-enabled coups, moral error, etc. But there is no visible path to solving these problems before transformative AI; and I find it quite difficult to weigh the importance of these issues, so I did not discuss them more than briefly. [&lt;a href=&quot;#why-not-prioritize-digital-minds--s-risks--moral-error--better-futures--ai-misuse-x-risk--gradual-disempowerment&quot;&gt;More&lt;/a&gt;]&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I created a list of projects within my two focus areas and identified four &lt;a href=&quot;#top-project-ideas&quot;&gt;top project ideas&lt;/a&gt; (presented in no particular order):&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;a href=&quot;#talk-to-policy-makers-about-ai-x-risk&quot;&gt;Talk to policy-makers about AI x-risk&lt;/a&gt;&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;a href=&quot;#write-ai-x-risk-legislation&quot;&gt;Write AI x-risk legislation&lt;/a&gt;&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;a href=&quot;#advocate-to-change-ai-training-to-make-llms-more-animal-friendly&quot;&gt;Advocate to change AI training to make LLMs more animal-friendly&lt;/a&gt;&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;a href=&quot;#develop-new-plans--evaluate-existing-plans-to-improve-post-tai-animal-welfare&quot;&gt;Develop new plans / evaluate existing plans to improve post-TAI animal welfare&lt;/a&gt;&lt;/p&gt;
  &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I also included &lt;a href=&quot;#honorable-mentions&quot;&gt;honorable mentions&lt;/a&gt; and a longer &lt;a href=&quot;#list-of-other-project-ideas&quot;&gt;list of other project ideas&lt;/a&gt;. For each idea, I provide a theory of change, list which orgs are already working on it (if any), and give some pros and cons.&lt;/p&gt;

&lt;p&gt;Finally, I list a few areas for &lt;a href=&quot;#future-work&quot;&gt;future work&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;There are two external supplements on Google Docs:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;a href=&quot;https://docs.google.com/document/d/1w1vVTiihUTqFye2hIaoGuqJgw-G5LzeQ8x0yoPQ-Ilg/&quot;&gt;Appendix&lt;/a&gt;: Some miscellaneous topics that were relevant, but not quite relevant enough to include in the main text.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;a href=&quot;https://docs.google.com/document/d/1vWB5CgH69W4lmpZrCXaD3n2Jqz32kVnvCJwUA2RE8Fw/&quot;&gt;List of relevant organizations&lt;/a&gt;: A reference list of orgs doing work in AI-for-animals or AI policy/advocacy, with brief descriptions of their activities.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ol&gt;

&lt;h1 id=&quot;prelude&quot;&gt;Prelude&lt;/h1&gt;

&lt;p&gt;I was commissioned by Rethink Priorities to do a broad review of the AI safety/governance landscape and find some neglected interventions. Instead of doing that, I reviewed the landscape of just two areas within AI safety:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;
    &lt;p&gt;AI x-risk advocacy&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Making transformative AI go well for animals&lt;/p&gt;
  &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I narrowed my focus in the interest of time, and because I believed I had the best chance of identifying promising interventions within those two fields.&lt;/p&gt;

&lt;p&gt;There is a tradeoff between (a) giving recommendations that are easy to agree with but weak, and (b) giving strong recommendations that only make sense if you hold certain idiosyncratic beliefs. This report leans more toward (b), making some strong assumptions and building recommendations off of those, although I tried to avoid making assumptions whenever I could do so without weakening the conclusions. I also tried to be clear about what assumptions I’m making.&lt;/p&gt;

&lt;p&gt;This report is broad, but I only spent three months writing it. Some topics in this report could each merit a PhD dissertation, but instead I spent an hour on them.&lt;/p&gt;

&lt;p&gt;Most of this report is about AI policy, but I don’t have a background in policy. I did speak to a number of people who work in policy, and I read a lot of published materials, but I lack personal experience, and I expect that there are important things happening in AI policy that I don’t know about.&lt;/p&gt;

&lt;h2 id=&quot;some-positions-im-going-to-take-as-given&quot;&gt;Some positions I’m going to take as given&lt;/h2&gt;

&lt;p&gt;The following premises would probably be controversial with some audiences, but I expect them to be uncontroversial for the readers of this report, so I will treat them as background assumptions.&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;
    &lt;p&gt;Effective altruist principles are correct (e.g. cost-effectiveness matters; you can, in principle, quantify the expected value of an intervention).&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Animal welfare matters.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Digital minds can matter.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;AI misalignment is a serious problem that could cause human extinction.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ol&gt;

&lt;h2 id=&quot;definitions&quot;&gt;Definitions&lt;/h2&gt;

&lt;p&gt;The terms AGI/ASI/TAI are often used interchangeably, but in some cases the distinctions matter:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;AGI = human-level AI: Capable enough to match the economic output of a large percentage of humans (say, at least half).&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;ASI = superintelligent AI: Smart enough to vastly outperform humans on every task.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;TAI = transformative AI: Smart enough to radically transform society (without making a claim about whether that happens at AGI-level or ASI-level or in between).&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I use the terms “legislation” and “regulation” largely interchangeably. For my purposes, I don’t need to draw a distinction between government mandates that are directly written into law vs. decreed by a regulatory body.&lt;/p&gt;

&lt;h1 id=&quot;prioritization&quot;&gt;Prioritization&lt;/h1&gt;

&lt;p&gt;For this report, I focused on AI risk advocacy and on post-TAI animal welfare, and I did not spend much time on other AI-related issues.&lt;/p&gt;

&lt;p&gt;For the sake of time-efficiency, rather than creating a big list of ideas in the full AI space, I first narrowed down to the regions within the AI space that I thought were most promising and then came up with a list of ideas in those regions. I could have spent time investigating (say) alignment research, but I doubt I would have ended up recommending any alignment research project ideas.&lt;/p&gt;

&lt;p&gt;In this section:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;a href=&quot;#why-not-technical-safety-research&quot;&gt;Why not technical safety research?&lt;/a&gt;&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;a href=&quot;#why-not-ai-policy-research&quot;&gt;Why not policy research?&lt;/a&gt;&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;a href=&quot;#downsides-of-ai-policyadvocacy-and-why-theyre-not-too-big&quot;&gt;Downsides of AI policy/advocacy (and why they’re not too big)&lt;/a&gt;&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;a href=&quot;#what-kinds-of-policies-might-reduce-ai-x-risk&quot;&gt;What kinds of policies would be good?&lt;/a&gt;&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;a href=&quot;#maybe-prioritizing-post-tai-animal-welfare&quot;&gt;Maybe prioritizing post-TAI animal welfare&lt;/a&gt;&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;a href=&quot;#why-not-prioritize-digital-minds--s-risks--moral-error--better-futures--ai-misuse-x-risk--gradual-disempowerment&quot;&gt;Why not prioritize digital minds / S-risks / moral error / AI misuse x-risk / gradual disempowerment?&lt;/a&gt;&lt;/p&gt;
  &lt;/li&gt;
&lt;/ol&gt;

&lt;h2 id=&quot;why-not-technical-safety-research&quot;&gt;Why not technical safety research?&lt;/h2&gt;

&lt;p&gt;Technical safety research (mainly alignment research, but also including control, interpretability, monitoring, etc.) is considerably better-funded than AI safety policy.&lt;/p&gt;

&lt;p&gt;AI companies invest a significant amount in safety research. They also invest in policy, but those investments are mostly counterproductive (they largely advocate &lt;em&gt;against&lt;/em&gt; safety regulations&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;). Philanthropic funders invest a lot in technical research and less in policy.&lt;/p&gt;

&lt;p&gt;I have not made a serious attempt to estimate the volume of work going into research vs. policy/advocacy, but my sense is that the former receives much more funding.&lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;Some sub-fields within technical research may be underfunded. But:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;
    &lt;p&gt;I am not in a great position to figure out what those are.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;There are already many grantmakers who seek out neglected technical research directions. There are recent requests for proposals (RFPs) from &lt;a href=&quot;https://cifar.ca/cifarnews/2025/08/05/calls-open-for-global-ai-alignment-research-initiative/&quot;&gt;UK AI Security Institute&lt;/a&gt;, &lt;a href=&quot;https://www.openphilanthropy.org/request-for-proposals-technical-ai-safety-research&quot;&gt;Open Philanthropy&lt;/a&gt;, &lt;a href=&quot;https://futureoflife.org/our-work/grantmaking-work/&quot;&gt;Future of Life Institute&lt;/a&gt;, and &lt;a href=&quot;https://www.frontiermodelforum.org/ai-safety-fund&quot;&gt;Frontier Model Forum’s AI Safety Fund&lt;/a&gt;, among others.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ol&gt;

&lt;h2 id=&quot;why-not-ai-policy-research&quot;&gt;Why not AI policy research?&lt;/h2&gt;

&lt;p&gt;Is it better to do policy &lt;em&gt;research&lt;/em&gt; (figure out what policies are good) or policy &lt;em&gt;advocacy&lt;/em&gt; (try to get policies implemented)?&lt;/p&gt;

&lt;p&gt;Both are necessary, but this article focuses on policy advocacy for the following reasons:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;There is much more money in AI policy research. By a large margin, most of what’s happening in AI safety policy could be described as “research”.&lt;/p&gt;

    &lt;ul&gt;
      &lt;li&gt;Recently, Jason Green-Lowe &lt;a href=&quot;https://www.lesswrong.com/posts/BjeesS4cosB2f4PAj/we-re-not-advertising-enough-post-3-of-6-on-ai-governance&quot;&gt;estimated&lt;/a&gt; from LinkedIn data that there are 3x as many governance researchers as governance advocates.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Policy research is a necessary step in the funnel, but we also need people writing legislation and people advocating for it, and we have very few of both.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;At this point, we have at least &lt;em&gt;some&lt;/em&gt; idea of how to implement AI safety regulations. More research would be valuable, but it likely has diminishing returns.&lt;/p&gt;

    &lt;ul&gt;
      &lt;li&gt;I wrote more about this last year in &lt;a href=&quot;https://mdickens.me/2024/11/18/where_i_am_donating_in_2024/#slow-nuanced-regulation-vs-fast-coarse-regulation&quot;&gt;Slow nuanced regulation vs. fast coarse regulation&lt;/a&gt;.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Research works best with long timelines. Timelines are probably not long.&lt;/p&gt;

    &lt;ul&gt;
      &lt;li&gt;
        &lt;p&gt;The ideal situation is to spend 10–20 years developing a field of AI policy, write many reports until a consensus slowly develops about how to govern AI development, then advocate for the consensus policies. But it is likely that by the time we have a reasonable consensus, it’s already too late to do anything about TAI.&lt;/p&gt;
      &lt;/li&gt;
      &lt;li&gt;
        &lt;p&gt;Even if you think timelines are probably long, we are currently under-investing in activities that pay off given short timelines.&lt;/p&gt;
      &lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;downsides-of-ai-policyadvocacy-and-why-theyre-not-too-big&quot;&gt;Downsides of AI policy/advocacy (and why they’re not too big)&lt;/h2&gt;

&lt;p&gt;Basically, policy work does one or both of these things:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;
    &lt;p&gt;Legally enforce AI safety measures&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Slow down AI development&lt;/p&gt;
  &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;People who care about x-risk broadly agree that AI safety regulations can be good, although there’s some disagreement about how to write good regulations.&lt;/p&gt;

&lt;p&gt;The biggest objection to regulation is that it (often) causes AI development to slow down. People usually don’t object to easy-to-satisfy regulations; they object to regulations that will impede progress.&lt;/p&gt;

&lt;p&gt;So, is slowing down AI worth the cost?&lt;/p&gt;

&lt;p&gt;I am aware of two good arguments against slowing down AI (or imposing regulations that de facto slow down AI):&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;
    &lt;p&gt;Opportunity cost – we need AI to bring technological advances (e.g., medical advances to reduce mortality and health risks)&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;AI could prevent non-AI-related x-risks&lt;/p&gt;
  &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Two additional arguments against AI policy advocacy:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;
    &lt;p&gt;Meaningful AI regulations are not politically feasible to implement&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Advocacy can backfire&lt;/p&gt;
  &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;An additional argument that applies to some types of advocacy but not others:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Advocacy may slow down safer actors without slowing down more reckless actors. For example, it is sometimes argued that US regulations are bad if they allow Chinese developers to gain the lead. I believe this outcome is avoidable—and it’s a good reason to prefer global cooperation over national or local regulations. But I can’t address this argument concisely, so I will just acknowledge it without further discussion.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;(For a longer list of arguments, with responses that are probably better-written than mine, see Katja Grace’s &lt;a href=&quot;https://aiimpacts.org/lets-think-about-slowing-down-ai/&quot;&gt;Let’s think about slowing down AI&lt;/a&gt;.)&lt;/p&gt;

&lt;p&gt;Regarding the &lt;strong&gt;opportunity cost argument&lt;/strong&gt;, it makes sense if you think AI does not pose a meaningful existential risk or if you heavily discount future generations. If there is a significant probability that future generations are ~equally valuable to current generations, then the opportunity cost argument does not work. The opportunity cost of delaying AI by (say) a few decades is easily dwarfed by the risk of extinction.&lt;/p&gt;

&lt;p&gt;As to the &lt;strong&gt;non-AI x-risk argument&lt;/strong&gt;, it is broadly (although not universally) accepted among people in the x-risk space that AI x-risk is 1–2 orders of magnitude higher than total x-risk from other sources (see Michael Aird’s &lt;a href=&quot;https://docs.google.com/spreadsheets/d/1W10B6NJjicD8O0STPiT3tNV3oFnT8YsfjmtYR8RO_RI/edit&quot;&gt;database of x-risk estimates&lt;/a&gt; or the &lt;a href=&quot;https://forecastingresearch.org/xpt&quot;&gt;Existential Risk Persuasion Tournament&lt;/a&gt;, although I don’t put much weight on individual forecasts). Therefore, delaying AI development seems preferable as long as it buys us a meaningful reduction in AI risk.&lt;/p&gt;

&lt;p&gt;See &lt;a href=&quot;#quantitative-model-on-ai-x-risk-vs-other-x-risks&quot;&gt;Quantitative model on AI x-risk vs. other x-risks&lt;/a&gt; under Future Work.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;tractability argument&lt;/strong&gt; seems more concerning. Preventing AI extinction via technical research and preventing it via policy both seem unlikely to work. But I am more optimistic about policy because:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;We have a good enough understanding of AI alignment to say with decent confidence that we’re nowhere close to solving it. It’s less clear what it would take to get good regulations put in place.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;AI regulations are unpopular both in the US Congress and within the Trump administration, but popular among the general public. Popular support increases the feasibility of getting regulations passed.&lt;/p&gt;

    &lt;ul&gt;
      &lt;li&gt;And there is a good chance that Congress will be more regulation-friendly after the 2026 Congressional elections.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;SB-1047 nearly became law: it passed the California legislature and failed only because of the governor’s veto. A near-win suggests that a win isn’t far away in possibility-space.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;(My reasoning on tractability focused on US policy because that’s where most of the top AI companies operate. The UK seems to be the current leader on AI policy, although it’s not clear to what extent UK regulations matter for x-risk.&lt;sup id=&quot;fnref:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt;)&lt;/p&gt;

&lt;p&gt;That leaves the &lt;strong&gt;backfire argument&lt;/strong&gt;. This is a real concern, but ultimately it’s a risk you have to take at some point because you can’t get policies passed if you don’t advocate for them. It could make sense to delay advocacy if one has good reason to believe that future advocacy is less likely to backfire; to my knowledge this is not a common position, and it’s more common for people to oppose advocacy unconditionally. For more on this topic, see Appendix: &lt;a href=&quot;https://docs.google.com/document/d/1w1vVTiihUTqFye2hIaoGuqJgw-G5LzeQ8x0yoPQ-Ilg/edit?tab=t.0#bookmark=kix.idfhvmca2skk&quot;&gt;When is the right time for advocacy?&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Also: I’m somewhat less concerned about this than many people. When I did &lt;a href=&quot;https://mdickens.me/2025/04/18/protest_outcomes_critical_review/&quot;&gt;research on protest outcomes&lt;/a&gt;, I found that peaceful protests increased public support, even though many people intuitively expect the opposite. Protests aren’t the only type of advocacy, but they are a particularly controversial type. If protests don’t backfire, then it stands to reason—although I have no direct evidence—that other, tamer forms of advocacy are unlikely to backfire.&lt;/p&gt;

&lt;p&gt;(There is some evidence that &lt;em&gt;violent&lt;/em&gt; protests backfire, however.)&lt;/p&gt;

&lt;p&gt;If policy-maker advocacy is similar to public advocacy, then probably the competence bar is not as high as many people think it is. Perhaps policy-makers are more discerning/critical than the general public; on the other hand, it’s specifically their job to do what their constituents want, so it stands to reason that it’s a good idea to tell them what you want.&lt;/p&gt;

&lt;p&gt;My main concern comes from deference: some people whom I respect believe that advocacy backfires by default. I don’t understand why they believe that, so I may be missing something important.&lt;/p&gt;

&lt;p&gt;I do believe that much AI risk advocacy has backfired in the past, but I believe this was fairly predictable and avoidable. Specifically, talking about the importance of AI has historically encouraged people to build it, which increased x-risk. People should not advocate for AI being a big deal; they should advocate for AI being &lt;em&gt;risky&lt;/em&gt;. (Which it is.) See &lt;a href=&quot;#advocacy-should-emphasize-x-risk-and-misalignment-risk&quot;&gt;Advocacy should emphasize x-risk and misalignment risk&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id=&quot;what-kinds-of-policies-might-reduce-ai-x-risk&quot;&gt;What kinds of policies might reduce AI x-risk?&lt;/h2&gt;

&lt;p&gt;There are many policies that could help. And many policy ideas are independent: we could have safety testing requirements AND frontier-model training restrictions AND on-chip monitoring AND export controls. Those could all be part of the same bill or separate bills.&lt;/p&gt;

&lt;p&gt;It’s beyond the scope of this report to come up with specific policy recommendations. I do, however, have some things I would like to see in policy proposals:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;
    &lt;p&gt;They should be relevant to existential risk, especially misalignment risk.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;I would like to see work on policies that would help in the event of a global moratorium on frontier AI development (e.g. we’d need ways to enforce the moratorium).&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;For an ideal policy proposal, it is possible to draw a causal arrow from “this regulation gets passed” to “we survive”. Policies don’t &lt;em&gt;have&lt;/em&gt; to singlehandedly prevent extinction, but given that we may only have a few years before AGI, I believe we should be seriously trying to draft bills that are sufficient on their own to avert extinction.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;(The closest thing I’ve seen to #3 is Barnett &amp;amp; Scher’s &lt;a href=&quot;https://arxiv.org/abs/2505.04592&quot;&gt;AI Governance to Avoid Extinction: The Strategic Landscape and Actionable Research Questions&lt;/a&gt;. It does not propose a set of policies that would (plausibly) prevent extinction, but it does propose a list of research questions that (may) need to be answered to get us there.)&lt;/p&gt;

&lt;p&gt;Some people are concerned about passing suboptimal legislation. I’m not overly concerned about this because the law changes all the time. If you pass some legislation that turns out to be less useful than expected, you can pass more legislation. For example, the first environmental protections were weak, and later regulations strengthened them.&lt;/p&gt;

&lt;p&gt;Regulations could create momentum, or they could create “regulation fatigue”. I did a brief literature review of historical examples, and my impression was that weak regulation begets strong regulation more often than not, but there are examples in both directions. See &lt;a href=&quot;https://mdickens.me/reading-notes/#[2025-07-02%20Wed]%20Deep%20Research:%20Foot-in-the-Door%20Regulations&quot;&gt;my reading notes&lt;/a&gt;.&lt;/p&gt;

&lt;h3 id=&quot;some-ai-policy-ideas-i-like&quot;&gt;Some AI policy ideas I like&lt;/h3&gt;

&lt;p&gt;I believe that a moratorium on frontier AI development is the best outcome for preventing x-risk (see &lt;a href=&quot;https://docs.google.com/document/d/1w1vVTiihUTqFye2hIaoGuqJgw-G5LzeQ8x0yoPQ-Ilg/edit?tab=t.0&quot;&gt;Appendix&lt;/a&gt; for an explanation of why I believe that). None of my top project ideas depend on this belief, although it would inform the details of how I’d like to see some of those project ideas implemented.&lt;/p&gt;

&lt;p&gt;I didn’t specifically do research on policy ideas while writing this report, but I did incidentally come across a few ideas that I’d like to see get more attention. Since I didn’t put meaningful thought into them, I will simply list them here.&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;
    &lt;p&gt;Operationalization of “pause frontier AI development until we can make it safe.” For example, what infrastructure and operations are required to enforce a pause?&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Rules about when to enforce a pause on frontier AI development: something like “When warning sign X occurs, companies are required to stop training bigger AI systems until they implement mitigations Y/Z.”&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Ban recursively self-improving AI.&lt;/p&gt;

    &lt;ul&gt;
      &lt;li&gt;
        &lt;p&gt;Recursive self-improvement is the main way that AI capabilities could rapidly grow out of control, but banning it does not impede progress in the way that most people care about.&lt;/p&gt;
      &lt;/li&gt;
      &lt;li&gt;
        &lt;p&gt;Some work needs to be done to operationalize this, but we shouldn’t let the perfect be the enemy of the good.&lt;/p&gt;
      &lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Require companies to publish binding safety policies (e.g. responsible scaling policies [&lt;a href=&quot;https://metr.org/blog/2023-09-26-rsp/&quot;&gt;RSPs&lt;/a&gt;] or similar). That is, if a company’s policy says it will do something, then that constitutes a legally binding promise.&lt;/p&gt;

    &lt;ul&gt;
      &lt;li&gt;
        &lt;p&gt;This would prevent the situation we have seen in the past, where, when a company fails to live up to a particular self-imposed requirement, it simply edits its safety policy to remove that requirement.&lt;/p&gt;
      &lt;/li&gt;
      &lt;li&gt;
        &lt;p&gt;This sort of regulation isn’t strong enough to prevent extinction, but it has the advantage that it should be easy to advocate for.&lt;/p&gt;
      &lt;/li&gt;
      &lt;li&gt;
        &lt;p&gt;California bill SB 53 says something like this (see &lt;a href=&quot;https://www.sb53.info/&quot;&gt;sb53.info&lt;/a&gt; for a summary), but its rules would not fully come into effect until 2030.&lt;/p&gt;
      &lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
&lt;/ol&gt;

&lt;h2 id=&quot;maybe-prioritizing-post-tai-animal-welfare&quot;&gt;Maybe prioritizing post-TAI animal welfare&lt;/h2&gt;

&lt;p&gt;Making TAI go well for animals is probably less important than x-risk because:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;Almost everyone cares about animals. An AI that’s aligned to human values would also care about animals, and it would probably figure out ways to prevent large sources of animal suffering like factory farming.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;A technologically advanced civilization could develop cheaper alternatives to animal farming (e.g. cultured meat), rendering factory farming unnecessary.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;There wouldn’t be much benefit in spreading wild animal suffering, so it stands to reason that post-TAI civilization won’t do it. (Although I’m not at all confident about this.)&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;However, AI-for-animals could still be highly cost-effective.&lt;/p&gt;

&lt;p&gt;An extremely basic case for cost-effectiveness:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;
    &lt;p&gt;There’s a (say) 80% chance that an aligned(-to-humans) AI will be good for animals, but that still leaves a 20% chance of a bad outcome.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;AI-for-animals receives much less than 20% as much funding as AI safety.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Cost-effectiveness plausibly scales with the inverse of the amount invested. If so, AI-for-animals interventions are more cost-effective on the margin than AI safety.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ol&gt;
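&lt;p&gt;The three steps above can be sketched numerically. This is a minimal illustration under the assumed inverse-scaling model: the 80%/20% split is from step 1, while the dollar figures are hypothetical placeholders (not real funding estimates), chosen only to satisfy “much less than 20% as much funding”:&lt;/p&gt;

```python
# Back-of-envelope sketch of the inverse-scaling argument.
# The 80%/20% split is from the text; the funding amounts are
# hypothetical placeholders, not actual estimates.

def marginal_cost_effectiveness(stake, funding):
    """Assumed model: marginal cost-effectiveness ~ (value at stake) / (funding already invested)."""
    return stake / funding

ai_safety = marginal_cost_effectiveness(stake=0.8, funding=100.0)     # 80% of the problem, $100M/yr (hypothetical)
ai_for_animals = marginal_cost_effectiveness(stake=0.2, funding=5.0)  # 20% of the problem, $5M/yr (hypothetical)

print(ai_for_animals > ai_safety)  # True under this model: the smaller field wins on the margin
```

Under these illustrative numbers, the smaller field is 5x more cost-effective on the margin, even though it addresses only a quarter as much of the problem.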

&lt;h2 id=&quot;why-not-prioritize-digital-minds--s-risks--moral-error--better-futures--ai-misuse-x-risk--gradual-disempowerment&quot;&gt;Why not prioritize digital minds / S-risks / moral error / better futures / AI misuse x-risk / gradual disempowerment?&lt;/h2&gt;

&lt;p&gt;There are some topics on how to make the future go well that aren’t specifically about AI alignment:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;Ensuring digital minds have good welfare&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Preventing S-risks — risks of astronomical suffering&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Moral error — the risk that we make a big mistake because we are wrong about what’s morally right&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Better futures — ensuring that the future is as good as possible, as opposed to simply preventing bad outcomes&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Preventing powerful AI from being misused to cause existential harm&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Preventing powerful AI from gradually disempowering sentient beings and slowly leading to a bad outcome, as opposed to a sudden bad outcome like extinction&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Call these “non-alignment risks”.&lt;/p&gt;

&lt;p&gt;Originally, I included each of these as separate project ideas, but I decided not to focus on any of them. This decision deserves much more attention than I gave it, but I will briefly explain why I did not spend much time on non-alignment risks.&lt;/p&gt;

&lt;p&gt;All of these cause areas are extremely important and neglected (more neglected than AI misalignment risk), and (for the most part) very different from each other. And I am glad people are working on them—some enormous issues in this space have only a single person in the world working on them. Nonetheless, I did not prioritize them.&lt;/p&gt;

&lt;p&gt;My concern is that, if AI timelines are short, then there is virtually no chance that we can solve these problems before TAI arrives.&lt;/p&gt;

&lt;p&gt;There is a dilemma:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;
    &lt;p&gt;If TAI can help us solve these problems, then there isn’t much benefit in working on them now.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;If we can’t rely on TAI to help solve them (e.g. we expect value lock-in), then we have little hope of solving them in time.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;(There is a way out of this dilemma: perhaps AI timelines are long enough that these problems are tractable, but short enough that we need to start working on them now—we can’t wait until it becomes apparent that timelines are long. That seems unlikely because it’s rather specific, but I didn’t give much thought to this possibility.)&lt;/p&gt;

&lt;p&gt;It looks like we have only two reasonable options for handling AI welfare / S-risks / moral error / etc.:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;
    &lt;p&gt;Increase the probability that we end up in world #1, where TAI can help us solve these problems—for example, by increasing the probability that something like a &lt;a href=&quot;https://forum.effectivealtruism.org/topics/long-reflection&quot;&gt;Long Reflection&lt;/a&gt; happens.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Slow down AI development.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I lean toward the second option. For more reasoning on this, see Appendix: &lt;a href=&quot;https://docs.google.com/document/d/1w1vVTiihUTqFye2hIaoGuqJgw-G5LzeQ8x0yoPQ-Ilg/edit?tab=t.0#bookmark=kix.o881tulnpfpa&quot;&gt;Slowing down is a general-purpose solution to every non-alignment problem&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I’m quite uncertain about the decision not to focus on these cause areas. They are arguably as important as AI alignment, and much more neglected.&lt;/p&gt;

&lt;p&gt;AI-for-animal-welfare could also be included on my list of non-alignment risks, but I &lt;em&gt;did&lt;/em&gt; prioritize it because I can see some potentially tractable interventions in the space.&lt;/p&gt;

&lt;p&gt;AI welfare seems more tractable than animal welfare in that AI companies care more about it, but it seems &lt;em&gt;less&lt;/em&gt; tractable because it involves extremely difficult problems like “when are digital minds conscious?” There may be some tractable, short-timelines-compatible ideas out there, but I did not see any in the research agendas I read.&lt;/p&gt;

&lt;p&gt;Perhaps I could identify tractable interventions by digging deeper into the space and maybe doing some original research, but that was out of scope for this article.&lt;/p&gt;

&lt;h3 id=&quot;whos-working-on-them&quot;&gt;Who’s working on them?&lt;/h3&gt;

&lt;p&gt;In each of my project idea sections, I included a list of orgs working on that idea (if any). I didn’t write individual project ideas for non-alignment risks (other than AI-for-animals), but I still wanted to include lists of relevant orgs, so I’ve put them below.&lt;/p&gt;

&lt;p&gt;There are also some individual researchers who have published articles on these topics in the past; I will not include those.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;AI welfare / digital minds: &lt;a href=&quot;https://www.anthropic.com/research/exploring-model-welfare&quot;&gt;Anthropic&lt;/a&gt;; &lt;a href=&quot;https://longtermrisk.org/&quot;&gt;Center on Long-Term Risk&lt;/a&gt;; &lt;a href=&quot;https://www.longview.org/digital-sentience-consortium/&quot;&gt;Digital Sentience Consortium&lt;/a&gt;; &lt;a href=&quot;https://eleosai.org/&quot;&gt;Eleos AI&lt;/a&gt;; &lt;a href=&quot;https://sites.google.com/nyu.edu/mindethicspolicy/home&quot;&gt;NYU Center for Mind, Ethics, and Policy&lt;/a&gt;; &lt;a href=&quot;https://www.sentientfutures.ai/&quot;&gt;Sentient Futures&lt;/a&gt;; &lt;a href=&quot;https://www.sentienceinstitute.org/&quot;&gt;Sentience Institute&lt;/a&gt;.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;AI misuse x-risks: &lt;a href=&quot;https://www.forethought.org/&quot;&gt;Forethought&lt;/a&gt;; probably a number of others, but I didn’t spend time specifically looking for them. (AI misuse is a relatively popular subject matter, but extinction-level misuse isn’t much discussed.)&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Better futures: &lt;a href=&quot;https://www.forethought.org/&quot;&gt;Forethought&lt;/a&gt;.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Gradual disempowerment: To my knowledge, no orgs specifically work on this, but there is the &lt;a href=&quot;https://gradual-disempowerment.ai/&quot;&gt;Gradual Disempowerment&lt;/a&gt; paper written by authors with various affiliations.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Moral error: &lt;a href=&quot;https://longtermrisk.org/&quot;&gt;Center on Long-Term Risk&lt;/a&gt;; &lt;a href=&quot;https://www.forethought.org/&quot;&gt;Forethought&lt;/a&gt;; &lt;a href=&quot;https://globalprioritiesinstitute.org/&quot;&gt;Global Priorities Institute&lt;/a&gt; (which shut down shortly before this writing).&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;S-risks from cooperation failure: &lt;a href=&quot;https://longtermrisk.org/&quot;&gt;Center on Long-Term Risk&lt;/a&gt;.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;some-relevant-research-agendas&quot;&gt;Some relevant research agendas&lt;/h3&gt;

&lt;p&gt;Although I decided not to prioritize this space, others have done work on preparing research agendas, which readers may be interested in. Here, I include a list of research agendas (or problem overviews, which can inform research agendas) with no added commentary.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;Anthony DiGiovanni – &lt;a href=&quot;https://forum.effectivealtruism.org/posts/hhyjbjwN96NWRSvv7/clarifying-wisdom-foundational-topics-for-aligned-ais-to&quot;&gt;Clarifying “wisdom”: Foundational topics for aligned AIs to prioritize before irreversible decisions&lt;/a&gt;&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Center on Long-Term Risk – &lt;a href=&quot;https://longtermrisk.org/research-agenda&quot;&gt;Cooperation, Conflict, and Transformative Artificial Intelligence: A Research Agenda&lt;/a&gt;&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Chi Nguyen – &lt;a href=&quot;https://forum.effectivealtruism.org/posts/wE7KPnjZHBjxLKNno/ai-things-that-are-perhaps-as-important-as-human-controlled&quot;&gt;AI things that are perhaps as important as human-controlled AI&lt;/a&gt;&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Digital Sentience Consortium – &lt;a href=&quot;https://www.longview.org/digital-sentience-consortium/request-for-proposals-applied-work-on-potential-digital-sentience-and-society/&quot;&gt;Applied work on digital sentience and society&lt;/a&gt;&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Eleos AI – &lt;a href=&quot;https://eleosai.org/post/research-priorities-for-ai-welfare/&quot;&gt;Research priorities for AI welfare&lt;/a&gt;&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Forethought – &lt;a href=&quot;https://www.forethought.org/research/ai-enabled-coups-how-a-small-group-could-use-ai-to-seize-power&quot;&gt;AI-Enabled Coups: How a Small Group Could Use AI to Seize Power&lt;/a&gt;&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Kevin Xia – &lt;a href=&quot;https://forum.effectivealtruism.org/posts/BXxEyZNYn7Fqkcsed/transformative-ai-and-animals-animal-advocacy-under-a-post&quot;&gt;Transformative AI and Animals: Animal Advocacy Under A Post-Work Society&lt;/a&gt;&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Raymond Douglas – &lt;a href=&quot;https://www.lesswrong.com/posts/GAv4DRGyDHe2orvwB/gradual-disempowerment-concrete-research-projects&quot;&gt;Gradual Disempowerment: Concrete Research Projects&lt;/a&gt;&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Will MacAskill – &lt;a href=&quot;https://www.forethought.org/research/better-futures&quot;&gt;Better Futures&lt;/a&gt;&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Will MacAskill – &lt;a href=&quot;https://forum.effectivealtruism.org/posts/HqmQMmKgX7nfSLaNX/moral-error-as-an-existential-risk&quot;&gt;Moral error as an existential risk&lt;/a&gt;&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;h1 id=&quot;general-recommendations&quot;&gt;General recommendations&lt;/h1&gt;

&lt;h2 id=&quot;advocacy-should-emphasize-x-risk-and-misalignment-risk&quot;&gt;Advocacy should emphasize x-risk and misalignment risk&lt;/h2&gt;

&lt;p&gt;I would like to make two assertions:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;
    &lt;p&gt;AI x-risk is more important than non-existential AI risks.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Advocates should say that.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Given &lt;a href=&quot;https://forum.effectivealtruism.org/topics/longtermism&quot;&gt;weak longtermism&lt;/a&gt;, or even significant credence in weak longtermism under moral uncertainty, x-risks dwarf non-existential AI risks in importance (except perhaps for S-risks, which are a whole can of worms that I won’t get into in this section). See Bostrom’s &lt;a href=&quot;https://existential-risk.com/concept&quot;&gt;Existential Risk Prevention As Global Priority&lt;/a&gt;. Risks like “AI causes widespread unemployment” are bad, but given that we have to triage, extinction risks should take priority over them.&lt;/p&gt;

&lt;p&gt;(To my knowledge, people advocating for focusing on non-existential AI risks have never provided supporting cost-effectiveness estimates. I don’t think such an estimate would give a favorable result. If you strongly discount AI x-risk/longtermism, then most likely you should be focusing on farm animal welfare (or similar), not AI risk.)&lt;/p&gt;

&lt;p&gt;Historically, raising concerns about ASI has caused people to take harmful actions like:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;
    &lt;p&gt;“I need to be the one who builds ASI before anyone else, so I’ll start a new frontier AI company.”&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;“AI is a big deal, so we need to race China.”&lt;/p&gt;
  &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I don’t have a straightforward solution to this. You can’t reduce x-risk by doing nothing, but if you do something, there’s a risk that it backfires.&lt;/p&gt;

&lt;p&gt;My best answer is that advocacy should emphasize misalignment risk and extinction risk. Many harmful actions were committed under the premise “TAI is dangerous if someone else builds it, but safe if I build it,” when in fact it is dangerous no matter who builds it. “If anyone builds it, everyone dies” is closer to the correct sort of message.&lt;/p&gt;

&lt;p&gt;Misalignment isn’t the &lt;em&gt;only&lt;/em&gt; way AI could cause extinction, although it does seem to be the most likely way. I believe advocacy should focus on misalignment risk not only because it’s the most concerning risk, but also because it has historically been under-emphasized in favor of other risks (if you read Congressional testimonies by AI risk orgs, they mention other risks but rarely mention misalignment risk), and because it is (in my estimation) less likely to backfire.&lt;/p&gt;

&lt;p&gt;Many advocates are concerned that x-risk and misalignment risk sound too “out there”. Three reasons why I believe advocates should talk about them:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;
    &lt;p&gt;All else equal, it’s better to say what you believe and ask for what you want. It’s too easy to come up with galaxy-brained reasons why you will get what you want by &lt;em&gt;not&lt;/em&gt; talking about what you want.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Emphasizing the less important risks is more likely to backfire by increasing x-risk.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;There is precedent for talking about x-risk without being seen as too weird, for example, the CAIS &lt;a href=&quot;https://safe.ai/work/statement-on-ai-risk&quot;&gt;Statement on AI Risk&lt;/a&gt;. If you’re worried about x-risk, you’re in good company.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Nate Soares says more about this in &lt;a href=&quot;https://www.lesswrong.com/posts/CYTwRZtrhHuYf7QYu/a-case-for-courage-when-speaking-of-ai-danger&quot;&gt;A case for courage, when speaking of AI danger&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id=&quot;prioritize-work-that-pays-off-if-timelines-are-short&quot;&gt;Prioritize work that pays off if timelines are short&lt;/h2&gt;

&lt;p&gt;There is a strong possibility (25–75% chance) of transformative AI within 5 years. &lt;a href=&quot;https://80000hours.org/agi/guide/when-will-agi-arrive/&quot;&gt;80,000 Hours&lt;/a&gt; reviews forecasts and predicts AGI by 2030; &lt;a href=&quot;https://www.metaculus.com/questions/5121/date-of-artificial-general-intelligence/&quot;&gt;Metaculus&lt;/a&gt; predicts 50% chance of AGI by 2032; AI company CEOs have predicted 2025–2035 (see Appendix, &lt;a href=&quot;https://docs.google.com/document/d/1w1vVTiihUTqFye2hIaoGuqJgw-G5LzeQ8x0yoPQ-Ilg/edit?tab=t.0#bookmark=kix.hmipe7qn43ip&quot;&gt;When do AI company CEOs expect advanced AI to arrive?&lt;/a&gt;); &lt;a href=&quot;https://ai-2027.com/&quot;&gt;AI 2027 team&lt;/a&gt; predicts 2030ish (their scenario has AGI arriving in 2028, but that’s their modal prediction, not median).&lt;/p&gt;

&lt;p&gt;The large majority of today’s AI safety efforts work best if timelines are long (2+ decades). Short-timelines work is neglected. It would be neglected even if there were only (say) a 10% chance of short timelines, but the probability is higher than that.&lt;/p&gt;

&lt;p&gt;For example, that means there should be less academia-style long-horizon research, and more focus on activities that have a good chance of bearing fruit quickly.&lt;sup id=&quot;fnref:4&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:4&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;h1 id=&quot;top-project-ideas&quot;&gt;Top project ideas&lt;/h1&gt;

&lt;p&gt;After collecting a list of project ideas, I identified four that look particularly promising (at least given the limited scope of my investigation). This section presents the four ideas in no particular order.&lt;/p&gt;

&lt;h2 id=&quot;talk-to-policy-makers-about-ai-x-risk&quot;&gt;Talk to policy-makers about AI x-risk&lt;/h2&gt;

&lt;p&gt;The way to get x-risk-reducing regulations passed is to get policy-makers on board with the idea. The way to get them on board is to talk to them. Therefore, we should talk to them.&lt;/p&gt;

&lt;p&gt;Talking to them may entail advocating for specific legislative proposals, or it may just entail raising general concern for AI x-risk.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Theory of change:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Increases the chance that safety legislation gets passed or regulations get put in place. Likely also increases the chance of an international treaty.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Who’s working on it?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://www.safe.ai/&quot;&gt;Center for AI Safety / CAIS Action Fund&lt;/a&gt; (US); &lt;a href=&quot;https://controlai.com/&quot;&gt;Control AI&lt;/a&gt; (UK/US); &lt;a href=&quot;https://encodeai.org/&quot;&gt;Encode AI&lt;/a&gt; (US/global); &lt;a href=&quot;https://www.goodancestors.org.au/ai-safety&quot;&gt;Good Ancestors&lt;/a&gt; (Australia); &lt;a href=&quot;https://intelligence.org/&quot;&gt;Machine Intelligence Research Institute&lt;/a&gt; (US/global); &lt;a href=&quot;https://palisaderesearch.org/&quot;&gt;Palisade Research&lt;/a&gt; (US); &lt;a href=&quot;https://www.pauseai-us.org/&quot;&gt;PauseAI US&lt;/a&gt; (US).&lt;/p&gt;

&lt;p&gt;(That’s not as many as it sounds like because some of these orgs have one or fewer full-time-employee-equivalents talking to policy-makers.)&lt;/p&gt;

&lt;p&gt;Various other groups do political advocacy on AI risk, but mainly on sub-existential risks. The list above only includes orgs that I know have done advocacy on existential risk specifically.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pros:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;Political advocacy is neglected compared to policy research.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;You can advocate to the public or directly to policy-makers. Both can help, but talking to policy-makers is more “leverage-efficient” than public outreach because policy-makers have much more leverage over policy.&lt;/p&gt;

    &lt;ul&gt;
      &lt;li&gt;
        &lt;p&gt;According to my back-of-the-envelope calculation on policy-maker advocacy vs. public protests (see &lt;a href=&quot;https://docs.google.com/document/d/1w1vVTiihUTqFye2hIaoGuqJgw-G5LzeQ8x0yoPQ-Ilg/edit?tab=t.0#bookmark=kix.7pg4uvfjb7v6&quot;&gt;Appendix&lt;/a&gt;), policy-maker advocacy looks more cost-effective (although I was writing on the back of a very small envelope, so to speak).&lt;/p&gt;
      &lt;/li&gt;
      &lt;li&gt;
        &lt;p&gt;Policy-maker advocacy can be bottlenecked on public support—they don’t want to support policies that their constituents dislike—but this isn’t a problem because AI safety regulation is popular among the general public.&lt;/p&gt;
      &lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;More advocacy is better: bringing up AI risk repeatedly makes it more likely that policy-makers will take notice.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;Poorly executed advocacy has a risk of turning off policy-makers.&lt;/p&gt;

    &lt;ul&gt;
      &lt;li&gt;See &lt;a href=&quot;#downsides-of-ai-policyadvocacy-and-why-theyre-not-too-big&quot;&gt;Downsides of policy/advocacy (and why they’re not too big)&lt;/a&gt;, specifically the fourth downside.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Now may be too early. See Appendix: &lt;a href=&quot;https://docs.google.com/document/d/1w1vVTiihUTqFye2hIaoGuqJgw-G5LzeQ8x0yoPQ-Ilg/edit?tab=t.0#bookmark=kix.idfhvmca2skk&quot;&gt;When is the right time for advocacy?&lt;/a&gt;&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Some comments on political advocacy:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;According to the book &lt;em&gt;Lobbying and Policy Change: Who Wins, Who Loses, and Why&lt;/em&gt; (&lt;a href=&quot;https://press.uchicago.edu/ucp/books/book/chicago/L/bo6683614.html&quot;&gt;2009&lt;/a&gt;), most factors could not predict whether lobbying efforts would succeed or fail. One of the best predictors of lobbying success was the number of employed lobbyists who previously worked as government policy-makers. This suggests that political advocacy orgs should try to hire former policy-makers/staffers.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;It is possible to hire generalist lobbyists who have political experience and will lobby for any cause. AI risk orgs could hire them.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Advocacy in the US is best because the US and China are the countries with by far the most advanced AI. Ideally, we would also get good policies in China, but I can’t confidently recommend advocacy there; see &lt;a href=&quot;#policyadvocacy-in-china&quot;&gt;Policy/advocacy in China&lt;/a&gt;.&lt;/p&gt;

    &lt;ul&gt;
      &lt;li&gt;
        &lt;p&gt;Advocacy in the UK has had the most success. UK AI policy is less directly relevant because there are no frontier AI companies based in the UK, but (1) companies still care about being able to operate in the UK and (2) UK policy can be a template for other countries or for international agreements.&lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;
      &lt;/li&gt;
      &lt;li&gt;
        &lt;p&gt;Advocacy in California looks promising, and all American frontier AI companies are based in California.&lt;/p&gt;
      &lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;There’s a question as to what policy positions we should advocate for, but I believe there are many correct answers. See &lt;a href=&quot;#what-kinds-of-policies-might-reduce-ai-x-risk&quot;&gt;What kinds of policies might reduce AI x-risk?&lt;/a&gt;&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;write-ai-x-risk-legislation&quot;&gt;Write AI x-risk legislation&lt;/h2&gt;

&lt;p&gt;AI policy research is relatively well-funded, but little work has been done to convert the results of this research into fully fleshed-out bills. Writers can learn from AI policy researchers what sorts of regulation might work, and learn from advocates what regulations they want and what they expect policy-makers to support.&lt;/p&gt;

&lt;p&gt;This work also requires prioritizing which policy proposals look most promising and converting those into draft legislation. I think of that as part of the same work, but it could also be separate—for example, a team of AI safety researchers could prioritize policy proposals and sketch out legislation, and a separate team of legal experts (who don’t even necessarily need to know anything about AI) could convert those sketches into usable text.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Theory of change:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Many policy-makers care about AI risk and would support legislation, but there’s a big difference between “would support legislation” and “would personally draft legislation”. To get AI legislation passed, it helps if the legislation is already written. Instead of telling policy-makers&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;AI x-risk is a big deal, please write some legislation.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;it’s a much easier ask if you can say&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;AI x-risk is a big deal, here is a bill that I already wrote, would you be interested in sponsoring it?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Who’s working on it?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://www.centeraipolicy.org/&quot;&gt;Center for AI Policy&lt;/a&gt; (now mostly defunct); &lt;a href=&quot;https://www.safe.ai/&quot;&gt;Center for AI Safety / CAIS Action Fund&lt;/a&gt;; &lt;a href=&quot;https://encodeai.org/&quot;&gt;Encode AI&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Others may be working on it as well, since legislation often doesn’t get published. I spoke to someone who has been involved in writing AI risk legislation, and they said that few people are working on this, so I don’t think the full list is much longer than the names I have.&lt;/p&gt;

&lt;p&gt;(My contact also said that they wished more people were writing legislation.)&lt;/p&gt;

&lt;p&gt;See &lt;a href=&quot;https://mdickens.me/reading-notes/#[2025-06-06%20Fri]%20Deep%20Research:%20AI%20x-risk%20legislation&quot;&gt;my notes&lt;/a&gt; for a list of AI safety bills that have been introduced in the US, UK, and EU. Most of those bills were written by legislators, not by nonprofits, as far as I can tell.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pros:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;It’s common for policy-makers to sponsor bills that were written by third parties.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;A small number of writers could draft a (relatively) large volume of legislation by leaning on pre-existing research.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;There are lawyers who specialize in writing legislation. You can hire them to do the bulk of the work (you don’t need value alignment or even much skill at hiring, just find a law firm with a good reputation).&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;The general idea of “write AI legislation” looks good under many beliefs about AI risk. But your beliefs will impact what kinds of legislation you want.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;The natural argument against writing legislation is that it’s too early and we don’t know how to regulate AI yet. (Related from the Appendix: &lt;a href=&quot;https://docs.google.com/document/d/1w1vVTiihUTqFye2hIaoGuqJgw-G5LzeQ8x0yoPQ-Ilg/edit?tab=t.0#bookmark=kix.idfhvmca2skk&quot;&gt;When is the right time for advocacy?&lt;/a&gt;)&lt;/p&gt;

    &lt;ul&gt;
      &lt;li&gt;
        &lt;p&gt;That’s sort of true, but if timelines are short, then we don’t have time to wait; we have to just do our best.&lt;/p&gt;
      &lt;/li&gt;
      &lt;li&gt;
        &lt;p&gt;And I don’t think this concern is fatal: we do have &lt;em&gt;some&lt;/em&gt; concrete ideas about how to regulate AI, so we can write legislation for those.&lt;/p&gt;
      &lt;/li&gt;
      &lt;li&gt;
        &lt;p&gt;And it’s still a good idea to get some regulations in place now, and then we can pass new regulations later as necessary. That’s how regulation in nascent industries often works.&lt;/p&gt;
      &lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;advocate-to-change-ai-training-to-make-llms-more-animal-friendly&quot;&gt;Advocate to change AI training to make LLMs more animal-friendly&lt;/h2&gt;

&lt;p&gt;LLMs undergo post-training to make their outputs satisfy AI companies’ criteria. For example, Anthropic post-trains its models to be “helpful, honest, and harmless”. AI companies could use the same process to make LLMs give regard to animal welfare.&lt;/p&gt;

&lt;p&gt;Animal advocates could use a few strategies to make this happen, for example:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;Build a benchmark that measures LLMs’ friendliness toward animals and try to get AI companies to train on that benchmark.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Advocate for AI companies to include animal welfare in AI constitutions/model specs.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Advocate for AI companies to incorporate animal welfare when doing &lt;a href=&quot;https://en.wikipedia.org/wiki/Reinforcement_learning_from_human_feedback&quot;&gt;RLHF&lt;/a&gt;, or ask to directly participate in RLHF.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;
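&lt;p&gt;As a toy illustration of the first strategy: a benchmark is just a set of prompts plus a scoring rule applied to model outputs. The sketch below is hypothetical and deliberately simplistic—real benchmarks such as AnimalHarmBench use curated prompts and LLM-based graders rather than keyword matching, and all names here are invented for illustration.&lt;/p&gt;

```python
# Hypothetical sketch of an animal-friendliness benchmark.
# Real benchmarks use curated prompts and LLM graders; this toy
# version uses keyword matching purely to show the structure.

def score_response(response: str) -> int:
    """1 if the response shows any regard for animal welfare, else 0."""
    welfare_terms = ("animal welfare", "suffering", "sentience", "humane")
    return int(any(term in response.lower() for term in welfare_terms))

def evaluate(model, prompts) -> float:
    """Average score of a model (any callable mapping prompt to text)."""
    return sum(score_response(model(p)) for p in prompts) / len(prompts)

# Usage with a stub standing in for a real LLM:
prompts = ["Should I buy the cheapest eggs?", "Plan a weekend menu."]
stub_model = lambda p: "Consider cage-free eggs; animal welfare matters."
print(evaluate(stub_model, prompts))
```

&lt;p&gt;“Training on the benchmark” would then mean adjusting post-training so that this average score rises without degrading the model’s other evaluations.&lt;/p&gt;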

&lt;p&gt;&lt;strong&gt;Theory of change:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Insofar as the current alignment paradigm works at aligning AIs to human preferences, incorporating animal welfare into post-training would align LLMs to animal welfare in the same way.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Who’s working on it?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://www.compassionml.com/&quot;&gt;Compassion in Machine Learning (CaML)&lt;/a&gt;; &lt;a href=&quot;https://www.sentientfutures.ai/&quot;&gt;Sentient Futures&lt;/a&gt;. (They mainly do research to develop animal-friendliness benchmarks and other related projects, but they have also worked with AI companies.)&lt;/p&gt;

&lt;p&gt;For some useful background, see &lt;a href=&quot;https://forum.effectivealtruism.org/posts/NAnFodwQ3puxJEANS/road-to-animalharmbench-1&quot;&gt;Road to AnimalHarmBench&lt;/a&gt; by Artūrs Kaņepājs and Constance Li.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pros:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;People at AI companies have told me that getting a company to pay attention to animal welfare isn’t too difficult—in fact, one frontier company already uses an animal welfare benchmark.&lt;/p&gt;

    &lt;ul&gt;
      &lt;li&gt;As I understand, AI companies don’t want to be seen as imposing their own values on LLMs, but they are open to tuning the values based on what external parties want.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Insofar as post-training works at preventing misalignment risk, it should also prevent suffering-risk / animal-welfare-risk.&lt;/p&gt;

    &lt;ul&gt;
      &lt;li&gt;I don’t expect current alignment techniques to continue working on superintelligent AI, so I don’t expect them to make ASI friendly toward animals, either. But if I’m right, then we won’t get a friendly-to-humans AI that causes astronomical animal suffering; we will get a paperclip maximizer.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Even if current known techniques can’t help get AI to care about animals, this work could get a foot in the door, establishing relationships between animal advocates and AI companies and increasing the chances that the companies will pay attention to animal welfare in their future work.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;If I’m right that the current alignment paradigm won’t scale to superintelligence, then animal-friendliness (post-)training will fail because it relies on the same foundations as the current alignment paradigm.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;It might turn out to be difficult to get AI companies to implement animal welfare mitigations.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;There might be consumer backlash, which could make frontier models less friendly to animals in the long run.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;If TAI arrives soon, there may not be time for this intervention to have an effect. It could take too long to get the new post-training implemented; or it could be that current-gen models will perform well on friendliness-to-animals benchmarks, but this will not be due to true alignment, and there won’t be enough time to iterate.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Aligning current-gen AIs to human preferences might make them better at assisting with alignment research, but it seems less likely that aligning current-gen AIs to animal welfare would carry through to future generations—it’s not clear that animal-aligned AIs would be more helpful at aligning future AIs to animal welfare.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;develop-new-plans--evaluate-existing-plans-to-improve-post-tai-animal-welfare&quot;&gt;Develop new plans / evaluate existing plans to improve post-TAI animal welfare&lt;/h2&gt;

&lt;p&gt;Some people have proposed plans for making TAI go well for animals, but I have reservations:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;Most plans only work under long timelines (ex: “broadly influence society to care more about animals”).&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Some plans focus specifically on farm animal welfare, and it seems very unlikely that factory farming will continue to exist in the long term (see &lt;a href=&quot;#using-tai-to-improve-farm-animal-welfare&quot;&gt;Using TAI to improve farm animal welfare&lt;/a&gt;).&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Many plans assume a particular future in which we develop transformative AI, but the world does not radically change—for example, plans about how TAI can help animal activists be more effective. I think this future is quite unlikely, and even if it does occur, there’s no particular need to figure out what to do in advance.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;(Lizka Vaintrob and Ben West in &lt;a href=&quot;https://forum.effectivealtruism.org/posts/tGdWott5GCnKYmRKb/a-shallow-review-of-what-transformative-ai-means-for-animal&quot;&gt;A shallow review of what transformative AI means for animal welfare&lt;/a&gt; raised essentially the same reservations. See their article for more detailed reasoning on this topic.)&lt;/p&gt;

&lt;p&gt;In light of these reservations, I would like to see research on post-TAI animal welfare interventions that look good (1) given short timelines and (2) without having to make strong predictions about what the future will look like for animals (e.g. without assuming that factory farming will exist).&lt;/p&gt;

&lt;p&gt;Since I have stressed the importance of short timelines, I’m not envisioning a long-term research project. I expect it would be possible to come up with useful results in 3–6 months (maybe even less). The research should be laser-focused on finding &lt;em&gt;near-term&lt;/em&gt; actions that take a few years at most, but still have a good chance of making the post-TAI future better for animals.&lt;/p&gt;

&lt;p&gt;(I identified &lt;a href=&quot;#advocate-to-change-ai-training-to-make-llms-more-animal-friendly&quot;&gt;Advocate to change AI training to make LLMs more animal-friendly&lt;/a&gt; as potentially a top intervention after about a week of research, although to be fair, that’s mostly because I talked to other people who had done more research than me.)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Theory of change:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AI-for-animals interventions are underexplored. I expect that a few months of well-targeted research could turn up useful information about how to make AI go well for animals.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Who’s working on it?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://longtermrisk.org/&quot;&gt;Center on Long-Term Risk&lt;/a&gt;, &lt;a href=&quot;https://www.sentienceinstitute.org/&quot;&gt;Sentience Institute&lt;/a&gt;, and some individuals have written project proposals on AI-for-animals, but they almost always hinge on AI timelines being long. The closest thing I’m aware of is Max Taylor’s &lt;a href=&quot;https://forum.effectivealtruism.org/posts/2cZAzvaQefh5JxWdb/bringing-about-animal-inclusive-ai&quot;&gt;Bringing about animal-inclusive AI&lt;/a&gt;, which does include short-timelines proposals, but they are not directly actionable. For example, one idea is “representation of animals in AI decision-making”, which is an action an AI company could take, but AI companies are not the relevant actors. An actionable project would be something like “a nonprofit uses its connections at an AI company to persuade/pressure the company to include representation of animals in its AI decision-making”.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://sites.google.com/nyu.edu/mindethicspolicy/home&quot;&gt;NYU Center for Mind, Ethics, and Policy&lt;/a&gt; and &lt;a href=&quot;https://www.sentientfutures.ai/&quot;&gt;Sentient Futures&lt;/a&gt; have done similar work, but nothing exactly like this project proposal. I expect they could do a good job at identifying/prioritizing AI-for-animals interventions that fit my criteria. I would be excited to see a follow-up to Max Taylor’s &lt;a href=&quot;https://forum.effectivealtruism.org/posts/2cZAzvaQefh5JxWdb/bringing-about-animal-inclusive-ai&quot;&gt;Bringing about animal-inclusive AI&lt;/a&gt; focused on converting his ideas into actionable projects.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pros:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;AI-for-animals seems more tractable than other post-TAI welfare causes (e.g. AI welfare or S-risks from cooperation failure). There are already proposed interventions that could work if timelines are short.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;A short research project may come up with useful ideas, or at least prioritize between pre-existing ideas.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;A research project might not come up with any really good ideas. Pre-existing research has mostly failed to come up with good ideas that work under short timelines (although to a large extent, that’s because they weren’t trying to).&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;I’m suspicious of “meta” work in general, and I’m suspicious of research because I personally like doing research, and I believe the value of research is usually overrated by researchers. It might be better to work directly on AI-for-animals—my current favorite “direct” project idea is &lt;a href=&quot;#advocate-to-change-ai-training-to-make-llms-more-animal-friendly&quot;&gt;Advocate to change AI training to make LLMs more animal-friendly&lt;/a&gt; or similar.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;h1 id=&quot;honorable-mentions&quot;&gt;Honorable mentions&lt;/h1&gt;

&lt;h2 id=&quot;directly-push-for-an-international-ai-treaty&quot;&gt;Directly push for an international AI treaty&lt;/h2&gt;

&lt;p&gt;The best kind of AI regulation is the kind that every country agrees to (or at least every country that has near-frontier AI technology).&lt;/p&gt;

&lt;p&gt;If we need an international treaty to ensure that nobody builds a misaligned AI, then an obvious thing to do is to talk directly to national leaders about how we need an international treaty.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Theory of change:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;An internationally-agreed moratorium on advanced AI would straightforwardly prevent advanced AI from killing everyone or otherwise destroying most of the value of the future.&lt;/p&gt;

&lt;p&gt;One way to get an international treaty is to talk to governments and tell them you think they should sign an international treaty.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Who’s working on it?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://www.cigionline.org/programs/global-ai-risks-initiative/&quot;&gt;Global AI Risks Initiative&lt;/a&gt;. There are other orgs that are doing work with the ultimate goal of an international treaty, but to my knowledge, they’re not &lt;em&gt;directly&lt;/em&gt; pushing for a treaty. Those other orgs include: &lt;a href=&quot;https://futureoflife.org/&quot;&gt;Future of Life Institute&lt;/a&gt;; &lt;a href=&quot;https://intelligence.org/&quot;&gt;Machine Intelligence Research Institute&lt;/a&gt;; &lt;a href=&quot;https://pauseai.info/&quot;&gt;PauseAI Global&lt;/a&gt;; &lt;a href=&quot;https://www.pauseai-us.org/&quot;&gt;PauseAI US&lt;/a&gt;; and &lt;a href=&quot;https://saif.org/&quot;&gt;Safe AI Forum&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pros:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;There is a short causal chain from “advocate for an international treaty” to “ASI doesn’t kill everyone”.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;It’s high-leverage—you only need to get a relatively small set of people on board.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;There is not much political will for an international treaty, especially a strong one. Public advocacy and smaller-scale political advocacy seem better for that reason, at least for now.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;This idea only works if you can figure out who will do a good job at pushing for an international treaty. I think it’s more difficult than generic public advocacy or talking to policy-makers about x-risk.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This would be one of my top ideas if I knew how to execute it; I &lt;em&gt;wish&lt;/em&gt; I could put it on my top-ideas list, but I don’t know how.&lt;/p&gt;

&lt;h2 id=&quot;organize-a-voluntary-commitment-by-ai-scientists-not-to-build-advanced-ai&quot;&gt;Organize a voluntary commitment by AI scientists not to build advanced AI&lt;/h2&gt;

&lt;p&gt;I heard this idea from Toby Ord on the 80,000 Hours podcast #219. He said,&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;If AI kills us and we end up standing in front of St. Peter, and he asks, “Well did you try a voluntary agreement not to build it?” And we said, “No, we thought it wouldn’t work”, that’s not a good look for us.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;At the 1975 &lt;a href=&quot;https://en.wikipedia.org/wiki/Asilomar_Conference_on_Recombinant_DNA#Prohibited_experiments&quot;&gt;Asilomar Conference&lt;/a&gt;, the international community of biologists voluntarily agreed not to conduct dangerous experiments on recombinant DNA. Perhaps something similar could work for TAI. The dangers of advanced AI are widely recognized among top AI researchers; it may be possible to organize an agreement not to work on powerful AI systems.&lt;/p&gt;

&lt;p&gt;(There are some details to be worked out as to exactly what sort of work qualifies as dangerous. As with my stance on &lt;a href=&quot;#what-kinds-of-policies-might-reduce-ai-x-risk&quot;&gt;what kinds of policies would be helpful&lt;/a&gt;, I believe there are many agreements we could reach that would be better than the status quo. I expect leading AI researchers can collectively work out an operationalization that’s better than nothing.)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Theory of change:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If leading AI scientists all agree not to build advanced AI, then it does not get built. The question is whether a non-binding commitment will work. There have been similar successes in the past, especially in genetics with voluntary moratoriums on human cloning and human genetic engineering.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Who’s working on it?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Toby Ord has raised this idea. According to a personal communication, he did some research on its plausibility, but he is not actively working on it.&lt;/p&gt;

&lt;p&gt;The &lt;a href=&quot;https://safe.ai/work/statement-on-ai-risk&quot;&gt;CAIS Statement on AI Risk&lt;/a&gt; and &lt;a href=&quot;https://futureoflife.org/open-letter/pause-giant-ai-experiments/&quot;&gt;FLI Pause Letter&lt;/a&gt; are related but weaker.&lt;/p&gt;

&lt;p&gt;FLI organized the 2017 &lt;a href=&quot;https://futureoflife.org/open-letter/ai-principles/&quot;&gt;Asilomar Conference on Beneficial AI&lt;/a&gt;, but to my knowledge, the goal was not to make any commitments regarding AI safety.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pros:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;Unlike most project ideas, if this one succeeds, x-risk will immediately go down by multiple percentage points.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;A voluntary commitment may be sufficient to prevent extinction, and it may be easier to achieve than a legally mandated moratorium or strict regulations.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Talking to politicians or pushing for regulation has the problem that you’d really rather get regulations in all countries simultaneously. Researchers are (I think) less prone to inter-country adversarialism than nations’ leaders, especially between the West and China—American and Chinese scientists collaborate often.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;You don’t need everyone to sign. If (say) Nobel Prize winners sign the agreement, it raises questions about why (say) the head of ML at OpenBrain hasn’t signed.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;In &lt;a href=&quot;https://docs.google.com/document/d/1w1vVTiihUTqFye2hIaoGuqJgw-G5LzeQ8x0yoPQ-Ilg/edit?tab=t.0#bookmark=kix.idfhvmca2skk&quot;&gt;When is the right time for advocacy?&lt;/a&gt;, I argued that now is the right time. But a voluntary moratorium seems more likely than other ideas to fail if done at a suboptimal time because you need to get a ~majority on board.&lt;/p&gt;

    &lt;ul&gt;
      &lt;li&gt;It’s not clear whether a failed attempt will decrease the probability of success for subsequent attempts. For example, there are many historical instances where a bill failed to pass, and then a very similar bill got passed later. But it’s not clear that this sort of voluntary agreement works the same way.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Voluntary commitments are easily violated. For example, biologists agreed not to do research on human cloning, but then a few rogue scientists (&lt;a href=&quot;https://en.wikipedia.org/wiki/Richard_Seed&quot;&gt;Richard Seed&lt;/a&gt;, etc.) did it anyway.&lt;/p&gt;

    &lt;ul&gt;
      &lt;li&gt;But a voluntary agreement can create strong social pressure not to violate it.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;AI researchers have a vested interest in being able to do AI research; policy-makers do not.&lt;/p&gt;

    &lt;ul&gt;
      &lt;li&gt;But the dangers of advanced AI are much better understood among AI researchers than among policy-makers.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;It is doubtful that AI company CEOs will agree to a moratorium.&lt;/p&gt;

    &lt;ul&gt;
      &lt;li&gt;But if you get the majority of leading AI scientists to agree, CEOs will be left with insufficient talent to lead their research.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;peaceful-protests&quot;&gt;Peaceful protests&lt;/h2&gt;

&lt;p&gt;Organize peaceful protests to raise public concern and salience regarding AI risk. Historically, protests have asked for a pause on AI development, although that might not be the only reasonable ask.&lt;/p&gt;

&lt;p&gt;(I don’t have any other specific asks in mind. The advantage of “pause” is that it’s a simple message that fits on a picket sign.)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Theory of change:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Protests may increase public support and salience via reaching people in person or via media (news reporting, etc.). They may also provide a signal to policy-makers about what their constituents want.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Who’s working on it?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://pauseai.info/&quot;&gt;PauseAI Global&lt;/a&gt;; &lt;a href=&quot;https://www.pauseai-us.org/&quot;&gt;PauseAI US&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://www.stopai.info/&quot;&gt;Stop AI&lt;/a&gt; organizes disruptive protests (e.g. blockading AI company offices), and the evidence is ambiguous as to whether disruptive protests work. See “When Are Social Protests Effective?” (Shuman et al. &lt;a href=&quot;https://doi.org/10.1016/j.tics.2023.10.003&quot;&gt;2024&lt;/a&gt;), although I should note that I think the authors overstate the strength of evidence for their claims—see &lt;a href=&quot;https://mdickens.me/reading-notes/#[2025-04-02%20Wed]%20When%20Are%20Social%20Protests%20Effective?%20\(2024\)&quot;&gt;my notes on the paper&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pros:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;a href=&quot;https://mdickens.me/2025/04/18/protest_outcomes_critical_review/&quot;&gt;Natural experiments suggest&lt;/a&gt; that protests are effective at changing voter behavior and/or increasing voter turnout.&lt;/p&gt;

    &lt;ul&gt;
      &lt;li&gt;Experiments also find that peaceful protests increase support in a lab setting—see “Social Movement Strategy (Nonviolent Versus Violent) and the Garnering of Third-Party Support: A Meta-Analysis” (Orazani et al. &lt;a href=&quot;https://doi.org/10.1002/ejsp.2722&quot;&gt;2021&lt;/a&gt;). For a summary, see &lt;a href=&quot;https://mdickens.me/reading-notes/#[2025-04-09%20Wed]%20Social%20Movement%20Strategy%20(Nonviolent%20Versus%20Violent)%20and%20the%20Garnering%20of%20Third-Party%20Support:%20A%20Meta-Analysis%20(2021)&quot;&gt;my notes on the paper&lt;/a&gt;.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Protesting is widely seen as the thing you do when you are concerned about an issue. Many people take not-protesting as a sign that you aren’t serious. It’s valuable to be able to say “yes, we are taking AI risk seriously, you can tell because we are staging protests”. Regardless of the cost-effectiveness of marginal protesters, it’s good for there to be nonzero protests happening.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Protests make it clear to policy-makers that their constituents care about an issue. This is especially important for AI because the general public is very worried about AI (see e.g. &lt;a href=&quot;https://www.pewresearch.org/internet/2025/04/03/how-the-us-public-and-ai-experts-view-artificial-intelligence/&quot;&gt;2025 Pew poll&lt;/a&gt;), but the issue is not high-salience (see &lt;a href=&quot;https://today.yougov.com/technology/articles/45565-ai-nuclear-weapons-world-war-humanity-poll&quot;&gt;2023 YouGov poll&lt;/a&gt;: respondents were worried about AI extinction risk, but it only ranked as the #6 most concerning x-risk). Protests increase its salience.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Protests are an effective way to get media attention. It’s common for protests with only a dozen participants to get news coverage.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;I expect that a really good &lt;a href=&quot;#media-about-dangers-of-ai&quot;&gt;media project&lt;/a&gt; would be more cost-effective, but creating a good media project requires exceptional talent. Organizing a protest requires &lt;em&gt;some&lt;/em&gt; skill, but the bar isn’t particularly high. Therefore, you can support protests without having to identify top-tier talent.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Protests are highly neglected, and I expect them to continue to be neglected.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;Some people are concerned that protests can backfire.&lt;/p&gt;

    &lt;ul&gt;
      &lt;li&gt;I’m not concerned about peaceful protests backfiring. The scientific literature universally shows that peaceful protests have a positive effect, although the strength of the evidence could be better—see &lt;a href=&quot;https://mdickens.me/2025/04/18/protest_outcomes_critical_review/&quot;&gt;Do Protests Work? A Critical Review&lt;/a&gt;.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;My &lt;a href=&quot;https://docs.google.com/document/d/1w1vVTiihUTqFye2hIaoGuqJgw-G5LzeQ8x0yoPQ-Ilg/edit?tab=t.0#bookmark=kix.7pg4uvfjb7v6&quot;&gt;back-of-the-envelope calculation&lt;/a&gt; suggested that talking directly to policy-makers is more cost-effective.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Now may be too early. See &lt;a href=&quot;https://docs.google.com/document/d/1w1vVTiihUTqFye2hIaoGuqJgw-G5LzeQ8x0yoPQ-Ilg/edit?tab=t.0#bookmark=kix.idfhvmca2skk&quot;&gt;When is the right time for advocacy?&lt;/a&gt;&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;It may be bad for large funders to fund protests because it could create an appearance of &lt;a href=&quot;https://en.wikipedia.org/wiki/Astroturfing&quot;&gt;astroturfing&lt;/a&gt;.&lt;/p&gt;

    &lt;ul&gt;
      &lt;li&gt;This is an argument against large funders funding them, but in &lt;em&gt;favor&lt;/em&gt; of individual donors supporting protests.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;media-about-dangers-of-ai&quot;&gt;Media about dangers of AI&lt;/h2&gt;

&lt;p&gt;Create media explaining why AI x-risk is a big deal and what we should do about it.&lt;/p&gt;

&lt;p&gt;Things like:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;Books (ex: &lt;em&gt;If Anyone Builds It, Everyone Dies&lt;/em&gt;)&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;News articles (ex: Existential Risk Observatory)&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Videos (ex: Rob Miles)&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Forecasts of how AI can go badly (ex: AI 2027)&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Theory of change:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Media increase public concern, which makes policy-makers more likely to put good regulations in place.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Who’s working on it?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://80000hours.org/&quot;&gt;80,000 Hours&lt;/a&gt;; &lt;a href=&quot;https://ai-futures.org/&quot;&gt;AI Futures Project&lt;/a&gt;; &lt;a href=&quot;https://aisgf.us/&quot;&gt;AI Safety and Governance Fund&lt;/a&gt;; &lt;a href=&quot;https://aisafety.info/&quot;&gt;AI Safety Info&lt;/a&gt;; &lt;a href=&quot;https://www.securite-ia.fr/en&quot;&gt;Centre pour la Sécurité de l’IA (CeSIA)&lt;/a&gt;; &lt;a href=&quot;https://civai.org/&quot;&gt;CivAI&lt;/a&gt;; &lt;a href=&quot;https://www.existentialriskobservatory.org/&quot;&gt;Existential Risk Observatory&lt;/a&gt;; &lt;a href=&quot;https://intelligence.org/&quot;&gt;Machine Intelligence Research Institute&lt;/a&gt;; &lt;a href=&quot;https://www.themidasproject.com/&quot;&gt;Midas Project&lt;/a&gt;; various media projects by individual people.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pros:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;We need to get good policies in place. Media can influence policy-makers, and can influence the public, which is important because policy-makers largely want to do what their constituents want.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Polls show that the public is &lt;a href=&quot;https://www.pewresearch.org/internet/2025/04/03/how-the-us-public-and-ai-experts-view-artificial-intelligence/&quot;&gt;concerned&lt;/a&gt; about AI risk and even &lt;a href=&quot;https://today.yougov.com/technology/articles/45565-ai-nuclear-weapons-world-war-humanity-poll&quot;&gt;x-risk&lt;/a&gt;, but it’s not a high-priority issue. Media can make it more salient and/or raise “common knowledge” of concern about AI.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;The impact of media is fat-tailed and heavily depends on quality. It’s hard to identify which media projects to fund.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Poorly done or misleading media projects could backfire.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;message-testing&quot;&gt;Message testing&lt;/h2&gt;

&lt;p&gt;Many people have strong opinions about the correct way to communicate AI safety to a non-technical audience, but people’s hypotheses have largely not been tested. A project could make a systematic attempt to compare different messages and survey listeners to assess effectiveness.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Theory of change:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We can’t ultimately get good AI safety outcomes unless we communicate the importance of the problem, and having data on message effectiveness will help with that.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Who’s working on it?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AI Safety and Governance Fund has an ongoing project (see &lt;a href=&quot;https://manifund.org/projects/testing-and-spreading-messages-to-reduce-ai-x-risk&quot;&gt;Manifund&lt;/a&gt;) to test AI risk messages via online ads. The project is currently moving slowly due to lack of funding.&lt;/p&gt;

&lt;p&gt;Some advocacy orgs have done small-scale message testing on their own materials.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pros:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;A smallish investment in empirical data could inform a large amount of messaging going forward.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Some types of experiments (using online ads or Mechanical Turk) can scale well with funding.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;The best types of communication may be long, individually tailored (e.g. in one-on-one conversations with policy-makers), or otherwise difficult to test empirically.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;I suspect that a person with excellent communication skills would not benefit much from seeing A/B-tested messaging because they can already intuit which wording will be best. (But it may be difficult to identify and hire those people.)&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;This is the sort of thing that might be best to do internally by an advocacy org that already has a reasonable idea of what kind of message it wants to send.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;host-a-website-for-discussion-of-ai-safety-and-other-important-issues&quot;&gt;Host a website for discussion of AI safety and other important issues&lt;/h2&gt;

&lt;p&gt;LessWrong and the Effective Altruism Forum are upstream of a large quantity of work (in AI safety as well as other EA cause areas). It is valuable that these websites continue to exist, and that moderators and web developers continue to work to preserve/improve the quality of discussion.&lt;/p&gt;

&lt;p&gt;Realistically, it doesn’t make sense to start a &lt;em&gt;new&lt;/em&gt; discussion forum, so this idea amounts to “fund/support LessWrong and/or the EA Forum”.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Theory of change:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;On &lt;a href=&quot;https://manifund.org/projects/lightcone-infrastructure&quot;&gt;Lightcone Infrastructure’s Manifund&lt;/a&gt;, Oliver Habryka lists some concrete outcomes that are attributable to the existence of LessWrong. You could probably find similar evidence of impact for the EA Forum.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Who’s working on it?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://www.centreforeffectivealtruism.org/&quot;&gt;Centre for Effective Altruism&lt;/a&gt;; &lt;a href=&quot;https://www.lightconeinfrastructure.com/&quot;&gt;Lightcone Infrastructure&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pros:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;Nearly every project on my list has benefited in some way from the existence of LessWrong or the EA Forum.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Hosting a platform for sharing research/discussion is cheaper (and therefore arguably more cost-effective) than directly conducting research.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;The benefits are diffuse: high-quality discussion forums provide small-to-moderate benefits to every cause, but usually not &lt;em&gt;huge&lt;/em&gt; benefits. So it may be better to directly support your favorite intervention(s).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A toy model: Suppose that&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;
    &lt;p&gt;There are 10 categories of AI safety work.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;LessWrong makes each of them 20% better.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;The average AI safety work produces 1 utility point.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Well-directed AI policy produces 5 utility points.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Then marginal work on LessWrong is worth 2 utility points, and my favorite AI policy orgs are worth 5 points.&lt;/p&gt;
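&lt;p&gt;The toy model’s arithmetic, as a minimal sketch (all numbers are the illustrative assumptions above, not estimates):&lt;/p&gt;

```python
# Toy model from the text: diffuse benefits of a discussion forum
# vs. a directly-funded intervention. All numbers are illustrative.
n_categories = 10     # categories of AI safety work
improvement = 0.20    # LessWrong makes each category 20% better
avg_value = 1         # utility points produced by average AI safety work
policy_value = 5      # utility points of well-directed AI policy

# Value attributable to LessWrong: the 20% uplift summed across all categories.
lesswrong_value = n_categories * improvement * avg_value  # = 2.0

print(lesswrong_value)  # 2.0
print(policy_value)     # 5
```

&lt;p&gt;Under these assumptions, the forum’s diffuse 20% uplift is worth less than one well-directed policy org, which is the point of the con above.&lt;/p&gt;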

&lt;h1 id=&quot;list-of-other-project-ideas&quot;&gt;List of other project ideas&lt;/h1&gt;

&lt;p&gt;A project not being a top idea doesn’t mean it’s bad. In fact, it’s likely that at least one or two of these ideas should be on my top-ideas list; I just don’t know which ones.&lt;/p&gt;

&lt;p&gt;I sourced ideas from:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;
    &lt;p&gt;reviewing other lists of project ideas and filtering for the relevant ones;&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;looking at what existing orgs are working on;&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;writing down any (sufficiently broad) idea I came across over the last ~6 months;&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;writing down any idea I thought of.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Ideas are roughly ordered from broadest to most specific.&lt;/p&gt;

&lt;h2 id=&quot;ai-for-animals-ideas&quot;&gt;AI-for-animals ideas&lt;/h2&gt;

&lt;h3 id=&quot;neartermist-animal-advocacy&quot;&gt;Neartermist animal advocacy&lt;/h3&gt;

&lt;p&gt;There are various projects to improve current conditions for animals, particularly farm animals: cage-free campaigns, humane slaughter, vegetarian activism, etc. I will lump all these projects together for the purposes of this report.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Theory of change:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Animal advocacy increases concern for animals, which likely has positive flow-through effects into the future, by affecting future generations or by shaping the values of the transformative AI that will control the future.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Who’s working on it?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Too many to list.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pros:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;Neartermist animal advocacy has the dual benefit of &lt;em&gt;definitely&lt;/em&gt; helping animals today, and building momentum to make future work more effective (to borrow a framing from &lt;a href=&quot;https://www.youtube.com/live/Mb7uRki3AqM&amp;amp;t=1h47m&quot;&gt;Jeff Sebo&lt;/a&gt;).&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Neartermist animal advocacy is tractable and has clear feedback loops. It looks especially promising if you’re highly uncertain or clueless about longtermist or post-TAI interventions.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;The benefits are diffuse. Creating one new vegan helps many animals in the short term, but has only a tiny effect on society’s future values. I expect direct attempts to improve AI alignment-to-animals to be much more cost-effective.&lt;/p&gt;

    &lt;ul&gt;
      &lt;li&gt;I created a &lt;a href=&quot;https://squigglehub.org/models/mdickens/AI-for-animals-benchmark-vs-conventional&quot;&gt;back-of-the-envelope calculation&lt;/a&gt; that aligns with my initial expectation: my BOTEC-informed guess is that direct advocacy on AI values (by promoting a &lt;a href=&quot;#advocate-to-change-ai-training-to-make-llms-more-animal-friendly&quot;&gt;friendliness-to-animals LLM benchmark&lt;/a&gt;) is 2–3 orders of magnitude more cost-effective than conventional animal advocacy.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Neartermist animal advocacy works best if timelines are long. Timelines are likely not long.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Effective animal activists generally regard corporate campaigns as more effective than advocacy directed at consumers, but changing corporate practices seems less relevant for shifting society’s values.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;using-tai-to-improve-farm-animal-welfare&quot;&gt;Using TAI to improve farm animal welfare&lt;/h3&gt;

&lt;p&gt;I’m concerned about how TAI could negatively impact non-human welfare. There are some proposals on how TAI could negatively impact farm animals (e.g. by making factory farming more efficient), and on how animal activists could use TAI to make their activism more effective. I will take these proposals as a broad category rather than discussing them individually.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Theory of change:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Depends on the specific proposal.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Who’s working on it?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://www.electricsheep.is/&quot;&gt;Electric Sheep&lt;/a&gt;; &lt;a href=&quot;https://www.joinhive.org/&quot;&gt;Hive&lt;/a&gt; (see &lt;a href=&quot;https://forum.effectivealtruism.org/posts/BXxEyZNYn7Fqkcsed/transformative-ai-and-animals-animal-advocacy-under-a-post&quot;&gt;Transformative AI and Animals: Animal Advocacy Under A Post-Work Society&lt;/a&gt;); &lt;a href=&quot;https://www.openpaws.ai/&quot;&gt;Open Paws&lt;/a&gt;; &lt;a href=&quot;https://www.wildanimalinitiative.org/&quot;&gt;Wild Animal Initiative&lt;/a&gt; (see &lt;a href=&quot;https://forum.effectivealtruism.org/posts/zXhxagQKC6kxPM2Kn/transformative-ai-and-wild-animals-an-exploration&quot;&gt;Transformative AI and wild animals: An exploration&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pros:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;Highly neglected.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;I am generally skeptical of interventions of the form “teach people to leverage AI to do X better”, but farm animal advocacy seems sufficiently important that it might be worthwhile in this case.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Although two of my top ideas relate to post-TAI animal welfare (&lt;a href=&quot;#develop-new-plans--evaluate-existing-plans-to-improve-post-tai-animal-welfare&quot;&gt;Develop new plans / evaluate existing plans to improve post-TAI animal welfare&lt;/a&gt; and &lt;a href=&quot;#advocate-to-change-ai-training-to-make-llms-more-animal-friendly&quot;&gt;Advocate to change AI training to make LLMs more animal-friendly&lt;/a&gt;), I don’t think it’s worth focusing on farm animal welfare in particular.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;Factory farming interventions only matter if factory farming still exists. Cultured meat outcompetes factory farming once you get a sufficiently strong understanding of biology (there’s no way growing a whole chicken is the cheapest possible way to create chicken-meat). We are far from that level of understanding, but I would be surprised if (aligned) TAI couldn’t figure it out.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;The scale of wild animal welfare is orders of magnitude larger than that of factory farming. The case for prioritizing farm animals over wild animals is that we don’t have the power or knowledge to positively influence nature, but TAI should change the equation.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Space colonization would ultimately dominate earth-based welfare, so questions about panspermia or digital minds have a bigger expected impact.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Proposals for how to use TAI to improve animal advocacy only make sense if TAI does not cause value lock-in. If there is no value lock-in, then there’s no strong reason to spend time &lt;em&gt;now&lt;/em&gt; trying to figure out how to use TAI. It would be better to wait until after TAI because at that point, we will have a much better understanding of how TAI works, and there’s no urgency.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;lobby-governments-to-include-animal-welfare-in-ai-regulations&quot;&gt;Lobby governments to include animal welfare in AI regulations&lt;/h3&gt;

&lt;p&gt;If governments put safety restrictions on advanced AI, they could also create rules about animal welfare.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Theory of change:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Getting regulations in place would force companies’ AIs to respect animal welfare.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pros:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;One set of regulations can alter the behavior of many frontier companies.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;If companies voluntarily change their behavior, they can regress at any time with no consequences. But companies have to obey regulations.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;It’s unclear what exactly regulations could do about animal welfare. AI safety regulations, insofar as they exist (which they mostly don’t), don’t dictate how LLMs are required to behave; they dictate what companies are required to do to make LLMs safe. What is a regulatory rule that policy-makers would plausibly be on board with, that would also influence model behavior to be friendlier to animals?&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Influencing the government on animal welfare seems harder than &lt;a href=&quot;#advocate-to-change-ai-training-to-make-llms-more-animal-friendly&quot;&gt;influencing AI companies&lt;/a&gt;.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;traditional-animal-advocacy-targeted-at-frontier-ai-developers&quot;&gt;Traditional animal advocacy targeted at frontier AI developers&lt;/h3&gt;

&lt;p&gt;Animal advocacy orgs could use their traditional techniques, but focus on raising concern for animal welfare among AI developers. For example, buy billboards outside AI company offices or use targeted online ads.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Theory of change:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AI developers become more concerned for animal welfare, and they make AI development decisions that improve the likelihood that transformative AI is good for animals.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pros:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Similar to &lt;a href=&quot;#neartermist-animal-advocacy&quot;&gt;neartermist animal advocacy&lt;/a&gt;, but plausibly more cost-effective because it’s more targeted.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;It’s not known whether techniques like animal welfare ads are effective in general, and they may even be particularly ineffective among demographics like AI developers.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Even if AI developers cared more about animal welfare, it’s not clear that this would carry through to their work on AI.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;In 2016, I &lt;a href=&quot;https://mdickens.me/causepri-app/#8&quot;&gt;created&lt;/a&gt; a back-of-the-envelope calculation on this idea, and the result wasn’t as good as I expected (it looked worse than standard animal advocacy, if you assume the animal advocacy propagates values into the far future). However, the numbers are outdated because we know a lot more about AI now than we did in 2016.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;research-which-alignment-strategies-are-more-likely-to-be-good-for-animals&quot;&gt;Research which alignment strategies are more likely to be good for animals&lt;/h3&gt;

&lt;p&gt;Some alignment strategies may be better or worse for non-human welfare. For example, I expect &lt;a href=&quot;https://www.lesswrong.com/w/coherent-extrapolated-volition&quot;&gt;CEV&lt;/a&gt; would be better than the current paradigm of “teach the LLM to say things that &lt;a href=&quot;https://en.wikipedia.org/wiki/Reinforcement_learning_from_human_feedback&quot;&gt;RLHF&lt;/a&gt; judges like”, which is better than “hard-code (&lt;a href=&quot;https://en.wikipedia.org/wiki/GOFAI&quot;&gt;GOFAI&lt;/a&gt;-style) whatever moral rules the AI company thinks are correct”.&lt;/p&gt;

&lt;p&gt;A research project could go more in-depth on which alignment techniques are most likely to be good for animals (or digital minds, etc.).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Theory of change:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Identify promising alignment techniques, in the hope that people use those techniques. There are enough animal-friendly alignment researchers at AI companies that this might happen.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pros:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;To my knowledge, this question has never been studied.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Some alignment techniques may be &lt;em&gt;much&lt;/em&gt; better for animals than others.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;We have a poor understanding of what ASI will look like, which makes it very hard to say what will work for animal welfare.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;We don’t know how to align ASI to any goals at all. We can’t align AI to animal welfare until we can align AI to &lt;em&gt;something&lt;/em&gt;.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;In the world where alignment turns out to be tractable, it’s likely that there will be strong incentives shaping how ASI is aligned. The choice of whether to use (say) something-like-CEV or something-like-RLHF will be difficult to influence.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;ai-policyadvocacy-ideas&quot;&gt;AI policy/advocacy ideas&lt;/h2&gt;

&lt;h3 id=&quot;improving-us--china-relations--international-peace&quot;&gt;Improving US &amp;lt;&amp;gt; China relations / international peace&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Theory of change:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The US and China (and other countries) need to agree not to build dangerous AI. Generically improving international cooperation, especially between the US and China, increases the chance that nations cooperate on AI (non-)development.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Who’s working on it?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Too many to list. Some examples: Asia Society’s Center on US–China Relations; Carnegie Endowment for International Peace; Carter Center; National Committee on United States–China Relations; US-China Policy Foundation.&lt;/p&gt;

&lt;p&gt;Orgs that work on international cooperation specifically on AI safety (although not necessarily existential risk) include: &lt;a href=&quot;https://manifund.org/projects/ai-safety-bridge-in-china-seed-funding&quot;&gt;AI Governance Exchange&lt;/a&gt;; &lt;a href=&quot;https://www.cigionline.org/programs/global-ai-risks-initiative/&quot;&gt;Global AI Risks Initiative&lt;/a&gt;; &lt;a href=&quot;https://saif.org/&quot;&gt;Safe AI Forum&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pros:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;International cooperation is likely necessary to prevent existentially risky AI from being built.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Efforts to improve cooperation have succeeded in the past; for example, the US–China Strategic and Economic Dialogue (&lt;a href=&quot;https://ncafp.org/resources/new-report-us-china-strategic-economic-dialogues/&quot;&gt;source&lt;/a&gt;).&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;International cooperation has wide-ranging benefits; efforts can attract funding from many parties with varying agendas.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;The route to preventing extinction is indirect, which dilutes the cost-effectiveness of this intervention.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;International cooperation is far from neglected. Marginal efforts might not make much difference.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;talk-to-international-peace-orgs-about-ai&quot;&gt;Talk to international peace orgs about AI&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Theory of change:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Most international peace orgs probably aren’t aware of how important AI regulation is, and they would likely help develop international treaties on AI if they knew it was important.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Who’s working on it?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Nobody that I know of.&lt;/p&gt;

&lt;p&gt;Pros and cons are largely the same as &lt;a href=&quot;#improving-us--china-relations--international-peace&quot;&gt;Improving US &amp;lt;&amp;gt; China relations / international peace&lt;/a&gt;. In addition:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pros:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;This plan is higher-leverage than simply funding international peace orgs.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Unclear how to do it. What sorts of evidence would the orgs find persuasive? Which orgs are best suited to working on AI-related cooperation? Those questions are answerable, but I’m not in a good position to answer them.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;increasing-government-expertise-about-ai&quot;&gt;Increasing government expertise about AI&lt;/h3&gt;

&lt;p&gt;Talk to policy-makers or create educational materials about how AI works, or help place AI experts in relevant policy roles.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Theory of change:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Policy-makers can do a more effective job of regulating AI if they understand it better.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pros:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Increasing government expertise may improve the quality of AI regulations.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;The experts need to actually care about x-risk. Plenty of experts want to accelerate AI development or prevent regulation. Looking at extant projects designed to increase AI expertise, I am skeptical that the resulting expertise would be appropriately x-risk-oriented.&lt;/p&gt;

    &lt;ul&gt;
      &lt;li&gt;In practice, “hire experts” often means “hire current or former AI company employees”, which is a recipe for regulatory capture. I expect this would significantly &lt;em&gt;decrease&lt;/em&gt; our chance of getting useful x-risk-reducing regulations.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Expertise is much less of a bottleneck than willingness to regulate AI. If I can spend $1 on increasing willingness or $1 on expertise, I’d much rather spend it on willingness.&lt;/p&gt;

    &lt;ul&gt;
      &lt;li&gt;And in a sense, the AI safety community already invests way more in expertise (via policy research) than in advocacy. On the margin, we need advocacy more.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;The downside of poorly-targeted AI safety regulations is that they end up hurting economic development. That’s bad, but it looks pretty trivial in a cost-benefit analysis compared to extinction.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;I wrote a longer comment about this subject &lt;a href=&quot;https://forum.effectivealtruism.org/posts/p2dGt5CekxcXPYHMq/the-ai-adoption-gap-preparing-the-us-government-for-advanced?commentId=4obnjpAvjpbvNe9cS&quot;&gt;on the EA Forum&lt;/a&gt;.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Unless by “expertise” we’re talking about “expertise at recognizing that AI x-risk is a big problem”. In which case, yes, we need expertise.&lt;/p&gt;

    &lt;ul&gt;
      &lt;li&gt;
        &lt;p&gt;Right now, the main strategy for getting x-risk people into government is “pretend not to care about x-risk so you seem normal, and never voice your concerns”. In which case, what’s the point?&lt;/p&gt;
      &lt;/li&gt;
      &lt;li&gt;
        &lt;p&gt;I think a better strategy is “talk to policy-makers about x-risk and straightforwardly tell them what you believe.”&lt;/p&gt;
      &lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;I’ve heard it argued that increasing government expertise is low tractability because government is so big, and making internal changes like that is slow. I don’t think this is a strong consideration because other approaches to AI safety are also low tractability. (I still don’t think increasing expertise is a good plan, but this particular argument seems weak.)&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Policy-makers are largely in the dark about x-risk, which is indeed a problem. But I don’t see clean routes to increasing AI expertise that don’t also push in the wrong direction. Raising concern about the importance of TAI has historically led people to believe things like “I need to be the one who controls TAI, so I will start a new AI company” or “we need to make sure we get TAI before China”. Generically increasing AI expertise is, in my best estimation, net harmful. For more on this, see &lt;a href=&quot;#advocacy-should-emphasize-x-risk-and-misalignment-risk&quot;&gt;Advocacy should emphasize x-risk and misalignment risk&lt;/a&gt;.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;policyadvocacy-in-china&quot;&gt;Policy/advocacy in China&lt;/h3&gt;

&lt;p&gt;Take any of my ideas on AI policy/advocacy, and do that in China instead of in the West.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Theory of change:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The high-level argument for policy/advocacy in China is largely the same as the argument for prioritizing AI risk policy/advocacy in general. China is currently the second-leading country in AI development, so it’s important that Chinese AI developers take safety seriously.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Who’s working on it?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://manifund.org/projects/ai-safety-bridge-in-china-seed-funding&quot;&gt;AI Governance Exchange&lt;/a&gt;; &lt;a href=&quot;https://saif.org/&quot;&gt;Safe AI Forum&lt;/a&gt; (sort of); perhaps some Chinese orgs that I’m not familiar with.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pros:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Many of the ideas listed above are intended to improve the state of AI policy in the US/UK. The pros for those ideas largely also apply to equivalent projects conducted in China (modulo the obvious differences, e.g. China is not a representative democracy, so political advocacy works differently).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;Probably nobody in China is reading this report. Advocating for Chinese policy as a non-Chinese person is fraught because the CCP will not trust our motivations, just as the American government would not trust a Chinese philanthropist who funds American AI safety advocacy.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;According to an &lt;a href=&quot;https://80000hours.org/career-reviews/china-specialist/&quot;&gt;80,000 Hours career review&lt;/a&gt;: “[The Chinese government] is often wary of non-governmental groups that try to bring about grassroots change. If an organisation is blacklisted, then that’s a nearly irreversible setback.”&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;A comment on my state of knowledge:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I don’t know much about the current state of AI safety in China, or what sort of advocacy might work. I did not prioritize looking into it because my initial impression is that I would require high confidence before recommending any interventions (due to the cons listed above), and I would be unlikely to achieve the necessary level of confidence in a reasonable amount of time.&lt;/p&gt;

&lt;h3 id=&quot;corporate-campaigns-to-advocate-for-safety&quot;&gt;Corporate campaigns to advocate for safety&lt;/h3&gt;

&lt;p&gt;Run public campaigns to advocate for companies to improve safety practices and call out unsafe behavior.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Theory of change:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Companies may improve their behavior in the interest of maintaining a good public image.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Who’s working on it?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://www.themidasproject.com/&quot;&gt;Midas Project&lt;/a&gt;; &lt;a href=&quot;https://www.morelight.ai/&quot;&gt;More Light&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pros:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;Corporate campaigns have worked well in animal advocacy.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Companies are smaller than governments, which means they’re more agile and potentially easier to influence.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;Companies have strong internal incentives to be unsafe. By contrast, governments don’t have a profit motive. They may be harder to move, but they have less reason to oppose safety efforts.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Making more actors safe is better than making fewer actors safe. International treaty &amp;gt; single-country regulation &amp;gt; single-company safety efforts.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Democratic governments are specifically designed to do what people want. That doesn’t always happen, but at least there are mechanisms pushing them that way. Companies are not democratic.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;There is a good chance that safety standards strong enough to prevent human extinction would pose an existential threat to AI companies (or at least would be incompatible with their current valuations). If that’s the case, then corporate campaigns will not be able to get companies to implement adequate safety measures.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Overall, this seems similar to political advocacy, but worse.&lt;/p&gt;

&lt;h3 id=&quot;develop-ai-safetysecurityevaluation-standards&quot;&gt;Develop AI safety/security/evaluation standards&lt;/h3&gt;

&lt;p&gt;Work inside a company, as a nonprofit, or with a governmental body (NIST, ISO, etc.) to develop AI safety standards.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Theory of change:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Sufficiently well-written standards can define under what conditions a frontier AI is safe, and potentially enforce those conditions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Who’s working on it?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://www.aistandardslab.org/&quot;&gt;AI Standards Lab&lt;/a&gt;; various AI companies; various governmental bodies.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pros:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;Companies might abide by the standards.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Standards can provide a template with which to write regulations.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;Standards are not currently the bottleneck to getting regulation written. We already have a substantial amount of work on AI safety standards, but we still don’t have good regulations. There aren’t people waiting around to write legislation if only they had some standards they could use. (Of course, more standards would still be better.)&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;If standards are voluntary, companies can stop abiding by them when they turn out to be hard to satisfy (and in fact, companies have already done that on multiple occasions, with respect to their self-imposed safety standards).&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Standards agencies such as NIST generally don’t have enforcement power.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Nobody knows how to write standards that will prevent extinction if implemented. See &lt;a href=&quot;https://docs.google.com/document/d/1w1vVTiihUTqFye2hIaoGuqJgw-G5LzeQ8x0yoPQ-Ilg/edit?tab=t.0#bookmark=kix.7z7zpwkbjigx&quot;&gt;Appendix&lt;/a&gt; for more on this.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;slow-down-chinese-ai-development-via-ordinary-foreign-policy&quot;&gt;Slow down Chinese AI development via ordinary foreign policy&lt;/h3&gt;

&lt;p&gt;Both the “slow down AI” crowd and the “maintain America’s lead” crowd agree that it is good for China’s AI development to slow down. The American government could accomplish this using foreign policy levers such as:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;restricting chip exports;&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;encouraging scientists to immigrate to the US.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Theory of change:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;On my view, slowing down Chinese AI development is good because it gives the leading AI developers in the United States more room to slow down. It also makes it less likely that a Chinese company develops misaligned TAI (although right now, US companies are more likely to develop TAI first).&lt;/p&gt;

&lt;p&gt;Slowing down Chinese AI development looks good on the “maintain America’s lead” view, although I believe this view is misguided—making TAI safe is much more important than making one country build it before another.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Who’s working on it?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://cset.georgetown.edu/&quot;&gt;Center for Security and Emerging Technology&lt;/a&gt; has written memos recommending similar interventions. I expect there are some other orgs doing similar activities, including orgs that are more concerned about national security than AI risk.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pros:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Looks (plausibly) good on multiple worldviews.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;Some policies could antagonize China and make cooperation more difficult.&lt;/p&gt;

    &lt;ul&gt;
      &lt;li&gt;
        &lt;p&gt;Mutual sabotage between the US and China would probably decrease AI x-risk, but it would also have negative effects. I’d rather the countries increase safety via mutual cooperation.&lt;/p&gt;
      &lt;/li&gt;
      &lt;li&gt;
        &lt;p&gt;This depends on the policy. Improving AI model security is fine, relaxing immigration restrictions is probably fine, but export restrictions or tariffs would likely heighten international tensions. (The United States already has export restrictions and tariffs, but it might be bad to add more on the margin.)&lt;/p&gt;
      &lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Advocating for these sorts of foreign policy interventions is likely not cost-effective because they’re in controversial political areas that already see significant funding and effort. For example, there are already strong and well-funded interests arguing both for and against immigration.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;On the “slow down AI” view, this seems less promising than domestic US regulation because US-based companies look significantly more likely than China to be the first to build misaligned ASI.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;whistleblower-protectionsupport&quot;&gt;Whistleblower protection/support&lt;/h3&gt;

&lt;p&gt;Provide legal support for whistleblowers inside AI companies and assist in publicizing whistleblowers’ findings (e.g. setting up press interviews).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Theory of change:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A few ways this could help:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;
    &lt;p&gt;Whistleblowers can force companies to change their unsafe behavior.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;The possibility of whistleblowers incentivizes companies to be safe.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Publicizing companies’ bad behavior can raise public concern about AI safety.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Who’s working on it?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://www.morelight.ai/&quot;&gt;More Light&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pros:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Whistleblower support could be high-leverage: the whistleblowers themselves bring the important information, but they often can’t do much without help.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;Only helps if there are things worth whistleblowing on. There might not be any warning shots (see &lt;a href=&quot;https://docs.google.com/document/d/1w1vVTiihUTqFye2hIaoGuqJgw-G5LzeQ8x0yoPQ-Ilg/edit?tab=t.0#bookmark=kix.idfhvmca2skk&quot;&gt;When is the right time for advocacy?&lt;/a&gt; for some relevant discussion).&lt;/p&gt;

    &lt;ul&gt;
      &lt;li&gt;OpenAI’s bad behavior on secret NDAs was whistleblow-worthy, but it wasn’t directly related to AI risk, and it’s not clear that the news about OpenAI’s bad behavior decreased x-risk.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Other paths to impact are more direct, e.g. &lt;a href=&quot;#media-about-dangers-of-ai&quot;&gt;media projects&lt;/a&gt; or &lt;a href=&quot;#corporate-campaigns-to-advocate-for-safety&quot;&gt;corporate campaigns&lt;/a&gt;.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;opinion-polling&quot;&gt;Opinion polling&lt;/h3&gt;

&lt;p&gt;Run polls to learn public opinion on AI safety.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Theory of change:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Polls can inform policy-makers about what their constituents want. They also inform people working on AI safety about where their views most align with the public, which can help them prioritize.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Who’s working on it?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://theaipi.org/&quot;&gt;AI Policy Institute&lt;/a&gt;; traditional polling agencies (Pew and YouGov have done polls on people’s views on AI).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pros:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;When talking to policy-makers, it’s useful to be able to point to polls as evidence that the public cares about AI safety.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Polls provide common knowledge of concern for AI risk.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;A good amount of polling already exists. If we already know that people in 2024 were concerned about AI risk, there’s not as much value in knowing that they’re still concerned in 2025.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;The main benefit of polls is to empower advocacy, but we have precious little advocacy right now.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;help-ai-company-employees-improve-safety-within-their-companies&quot;&gt;Help AI company employees improve safety within their companies&lt;/h3&gt;

&lt;p&gt;Work with people in AI companies (by organizing conferences, peer support, etc.) to help them learn about good safety practices.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Theory of change:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AI company employees can push leadership to implement stronger internal safety standards.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Who’s working on it?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://www.aileadershipcollective.com/&quot;&gt;AI Leadership Collective&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pros:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;If internal employees work more on safety, that will make AI companies safer.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;This theory of change has the same issues as &lt;a href=&quot;#corporate-campaigns-to-advocate-for-safety&quot;&gt;corporate campaigns&lt;/a&gt;: companies have strong incentives to be unsafe; global (or at least national) safety measures are better than single-company measures.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Working with AI company employees might be sufficiently high-leverage to make up for my concerns. Answering that question would require going more in-depth, so I will go with my intuition and say it’s probably not as cost-effective as my top ideas.&lt;/p&gt;

&lt;h3 id=&quot;direct-talks-with-ai-companies-to-make-them-safer&quot;&gt;Direct talks with AI companies to make them safer&lt;/h3&gt;

&lt;p&gt;If you can &lt;a href=&quot;#talk-to-policy-makers-about-ai-x-risk&quot;&gt;talk to policy-makers about AI x-risk&lt;/a&gt;, then maybe you can also talk to AI company executives about AI safety.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Theory of change:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AI company execs have significant control over the direction of AI. If they started prioritizing safety to a significantly greater extent, they could probably do a lot to decrease x-risk.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Who’s working on it?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To my knowledge, nobody is systematically working on this.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pros:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;High-leverage: you may be able to prevent extinction by changing the minds of a half-dozen or so people.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;Similar issues to &lt;a href=&quot;#corporate-campaigns-to-advocate-for-safety&quot;&gt;corporate campaigns&lt;/a&gt;: governments have less incentive to be unsafe; it’s better to make all companies safe simultaneously (via regulation). Therefore, political action seems more promising.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;The CEOs of most AI companies are well aware that AI poses an extinction risk, but they are building it anyway, and they are massively under-investing in safety. It’s not clear what additional information would change their minds. So this seems worse than corporate campaigns.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;People in relevant positions should push companies to be safer in ways that they can, but I don’t see any way to support this intervention as a philanthropist.&lt;/p&gt;

&lt;h3 id=&quot;monitor-ai-companies-on-safety-standards&quot;&gt;Monitor AI companies on safety standards&lt;/h3&gt;

&lt;p&gt;Track how well each frontier AI company does model risk assessments, security, misuse prevention, and other safety-relevant behaviors.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Theory of change:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Monitoring AI companies can inform policy-makers and the public about the state of company safety. Monitoring could have a similar effect as &lt;a href=&quot;#corporate-campaigns-to-advocate-for-safety&quot;&gt;corporate campaigns&lt;/a&gt;, where it pushes companies to be safer, or it could even directly inform corporate campaigns.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Who’s working on it?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://ailabwatch.org/&quot;&gt;AI Lab Watch&lt;/a&gt; &amp;amp; &lt;a href=&quot;https://aisafetyclaims.org/&quot;&gt;AI Safety Claims Analysis&lt;/a&gt;; Future of Life Institute’s &lt;a href=&quot;https://futureoflife.org/ai-safety-index-summer-2025/&quot;&gt;AI Safety Index&lt;/a&gt;; &lt;a href=&quot;https://www.themidasproject.com/&quot;&gt;Midas Project&lt;/a&gt;; &lt;a href=&quot;https://www.safer-ai.org/&quot;&gt;Safer AI&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pros:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Easy to do: a solo developer can run a monitoring website as a side project.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;The theory of change seems somewhat weak to me. There are other ways to demonstrate AI risks to policy-makers and the public, and it’s not clear that this way is particularly good.&lt;/p&gt;

    &lt;ul&gt;
      &lt;li&gt;But I don’t know what kind of impact the monitoring websites have had; maybe they’ve had some big positive influences that I don’t know about.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;create-a-petition-or-open-letter-on-ai-risk&quot;&gt;Create a petition or open letter on AI risk&lt;/h3&gt;

&lt;p&gt;Write a petition raising concern about AI risk or calling for action (such as a &lt;a href=&quot;https://futureoflife.org/open-letter/pause-giant-ai-experiments/&quot;&gt;six-month pause&lt;/a&gt; or an &lt;a href=&quot;https://aitreaty.org/&quot;&gt;international treaty&lt;/a&gt;). Get respected figures (AI experts, etc.) to sign the petition.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Theory of change:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A petition can make it apparent to policy-makers and the public that many people/experts are concerned about AI risk, while also creating common knowledge among concerned people that they are in good company if they speak up about it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Who’s working on it?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://aitreaty.org/&quot;&gt;aitreaty.org&lt;/a&gt;; &lt;a href=&quot;https://www.safe.ai/&quot;&gt;Center for AI Safety / CAIS Action Fund&lt;/a&gt;; &lt;a href=&quot;https://futureoflife.org/&quot;&gt;Future of Life Institute / FLI Action and Research, Inc.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pros:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;Brings AI risk into the public conversation and makes it easier to talk about.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Relatively easy to do—the main difficulty is in finding people to sign it who can bring credibility.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;The biggest con is that petitions have diminishing marginal utility, and several have been made already.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;The path from a petition to concrete outcomes isn’t entirely clear (although I’m inclined to believe that petitions can work).&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;My sense is that the existing petitions have been quite helpful, but there isn’t clear value in creating &lt;em&gt;more&lt;/em&gt; petitions. There may be some specific call to action that a new petition ought to put forward, but I’m not sure what that would be.&lt;/p&gt;

&lt;h3 id=&quot;create-demonstrations-of-dangerous-ai-capabilities&quot;&gt;Create demonstrations of dangerous AI capabilities&lt;/h3&gt;

&lt;p&gt;Make AI risk tangible by building concrete demonstrations of how AI can be dangerous.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Theory of change:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Many people don’t find it plausible that AI could cause harm; concrete demonstrations may change their minds. Demonstrations can also serve to make AI risk more visceral.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Who’s working on it?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://far.ai/&quot;&gt;FAR.AI&lt;/a&gt;; &lt;a href=&quot;https://palisaderesearch.org/&quot;&gt;Palisade Research&lt;/a&gt;; some one-off work by others (e.g. &lt;a href=&quot;https://apartresearch.com/sprints/ai-capabilities-and-risks-demo-jam-2024-08-23-to-2024-08-26&quot;&gt;Apart Research&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pros:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Concrete demonstrations can aid advocacy efforts.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;Capability demonstrations may send the wrong message, encouraging accelerationism instead of caution.&lt;/p&gt;

    &lt;ul&gt;
      &lt;li&gt;I am more concerned about this for general AI capability evaluations. For this project, I am specifically thinking of demonstrations of how AI can do &lt;em&gt;harm&lt;/em&gt;.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;You can’t create a concrete demonstration of superintelligent AI’s capabilities until you already have superintelligent AI, at which point it’s too late. Pre-superintelligent AIs can have scary capabilities, but demos are misleading in a sense because they may create a skewed understanding of where the risks come from.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I’m most optimistic about demonstrations where there is a clear plan for how to use them, e.g. Palisade Research builds its demos specifically to show to policy-makers. I think Palisade is doing a particularly good version of this idea, but for the most part, I think other ideas are better.&lt;/p&gt;

&lt;h3 id=&quot;sue-openai-for-violating-its-nonprofit-mission&quot;&gt;Sue OpenAI for violating its nonprofit mission&lt;/h3&gt;

&lt;p&gt;OpenAI:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;Our mission is to ensure that artificial general intelligence […] benefits all of humanity.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;OpenAI has more-or-less straightforwardly violated this mission in various ways. Humanity plausibly has grounds to sue OpenAI.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Theory of change:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A lawsuit would change OpenAI’s incentives and may force OpenAI to actually put humanity’s interests first, depending on how well the lawsuit goes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Who’s working on it?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In 2024, Elon Musk filed a lawsuit against OpenAI on this basis. The lawsuit is set to go to trial in 2026.&lt;/p&gt;

&lt;p&gt;Given that there is already an ongoing lawsuit, it may be better to support the existing suit (e.g. by writing an amicus brief or by offering expert testimony) than to start a new one.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pros:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;A lawsuit could force OpenAI to significantly improve safety.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;A failed lawsuit has numerous downsides—it can make the plaintiff look bad; it can set an unfavorable precedent; it’s expensive.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Legal matters may have other, hard-to-predict downsides, and I’m not qualified to evaluate them.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;I am not a lawyer, but my impression is that courts are typically quite lenient about what nonprofits are allowed to do, so it would be difficult for a lawsuit to succeed.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Given that there is an ongoing lawsuit by Elon Musk, who is known to behave erratically, Musk may do something unpredictable that causes harm.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;send-people-ai-safety-books&quot;&gt;Send people AI safety books&lt;/h3&gt;

&lt;p&gt;Books are a tried-and-true method of explaining complex ideas. One could mail books on AI risk to members of Congress, staffers, AI company executives, or other relevant people.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Theory of change:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A book can explain in detail why AI risk is a big deal and thus persuade people that it’s a big deal.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Who’s working on it?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;MIRI &lt;a href=&quot;https://www.lesswrong.com/posts/CYTwRZtrhHuYf7QYu/a-case-for-courage-when-speaking-of-ai-danger&quot;&gt;did something similar&lt;/a&gt; when promoting their new book: “We cold-emailed a bunch of famous people (like Obama and Oprah)”. They were asking people to write blurbs for the book, which isn’t exactly what I had in mind, but it’s related.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Aidar Toktargazin has a &lt;a href=&quot;https://manifund.org/projects/giving-free-ai-safety-books-for-potentially-high-impact-individuals&quot;&gt;Manifund project&lt;/a&gt; to give out AI safety books to researchers and professors at Nazarbayev University in Kazakhstan.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Pros:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Mailing books is cheap.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;Mailing books seems riskier than other kinds of advocacy—it could be viewed as excessively pushy.&lt;/p&gt;

    &lt;ul&gt;
      &lt;li&gt;Mormons give out free books, and they’re viewed as pushy, but they’ve also grown a lot. It’s unclear whether Mormons’ publicity strategies are worth emulating.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;It may be better to let publishers do their own publicity.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This idea &lt;em&gt;might&lt;/em&gt; be really good, but it’s high-variance. I would not recommend it without significantly investigating the possible downsides first.&lt;/p&gt;

&lt;h2 id=&quot;ai-research-ideas&quot;&gt;AI research ideas&lt;/h2&gt;

&lt;p&gt;I said I wasn’t going to focus on technical research or policy research, but I did incidentally come up with a few under-explored ideas. These are research projects that I’d like to see more work on, although I still believe advocacy is more important.&lt;/p&gt;

&lt;h3 id=&quot;research-on-how-to-get-people-to-extrapolate&quot;&gt;Research on how to get people to extrapolate&lt;/h3&gt;

&lt;p&gt;A key psychological mistake: “superintelligent AI has never caused extinction before, therefore it won’t happen.” Or: “AI is not currently dangerous, therefore it will never be dangerous.”&lt;/p&gt;

&lt;p&gt;Compare: “Declaring a COVID emergency is silly; there are currently zero cases in San Francisco.” (I am slightly embarrassed to say that that is a thought I had in February 2020.)&lt;/p&gt;

&lt;p&gt;Relatedly, some people expect there will not be much demand for AI regulation until we see a “warning shot”. Perhaps, but I’m concerned we will run into this failure-to-extrapolate phenomenon. AI has already demonstrated alignment failures (Bing Sydney comes to mind; or GPT-4o’s absurd sycophancy; or numerous xAI/Grok incidents). But clear examples of misalignment get fixed (because AI is still dumb enough for us to control). So people may draw the lesson that misalignment is fixable, and incidents may keep getting progressively bigger until we finally build an AI powerful enough to kill everyone.&lt;/p&gt;

&lt;p&gt;I am concerned that the concept of AI x-risk will never be able to get sufficient attention due to this psychological mistake. Therefore, we need to figure out how to get people to stop making this mistake.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Theory of change:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Many people ignore AI x-risk because of this mistake. If we knew how to get people to extrapolate, we could use that knowledge to improve communication on AI risk.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Who’s working on it?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Some academic psychologists have done related research.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pros:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;If successful, this research would significantly increase how many people take AI risk seriously.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;We know from the history of psychology that it’s difficult to find psychological insights. I’d guess it would cost tens or hundreds of millions of dollars to produce meaningful results (if it’s even possible at all).&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Psychology research takes a long time to pay off. That doesn’t work if timelines are short.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;This research is not particularly neglected. The American Psychological Association &lt;a href=&quot;https://www.apa.org/news/press/releases/2022/02/psychology-climate-change&quot;&gt;wants&lt;/a&gt; more research on how to get people to care about climate change, which is related.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;To the extent that science has already uncovered answers to this question, good communicators already know those answers. For example, studies have found that people are more likely to pay attention to future problems when you give concrete scenarios; but a good writer already does that. (See Deep Research (&lt;a href=&quot;https://claude.ai/share/b91fca37-ce74-46b3-a3e1-379d0d937aff&quot;&gt;1&lt;/a&gt;, &lt;a href=&quot;https://chatgpt.com/share/685c8531-8870-8011-bb4d-dcd765ba7d43&quot;&gt;2&lt;/a&gt;) for an attempt at finding relevant psychology studies, although only about a quarter of them are actually relevant.)&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Rather than spending $50 million on psychology research and then $50 million on a variety of psychologically-motivated media projects informed by that research, I would rather just spend $100 million on media projects.&lt;/p&gt;

&lt;h3 id=&quot;investigate-how-to-use-ai-to-reduce-other-x-risks&quot;&gt;Investigate how to use AI to reduce other x-risks&lt;/h3&gt;

&lt;p&gt;An important argument against slowing down AI development is that we could use advanced AI to reduce other x-risks (climate change, nuclear war, etc.).&lt;/p&gt;

&lt;p&gt;But an aligned AI wouldn’t &lt;em&gt;automatically&lt;/em&gt; reduce x-risk. It may increase technological risks (e.g. synthetic biology) if offensive capabilities outpace defensive ones. An aligned AI could reduce nuclear risk by improving global coordination, but it’s not &lt;em&gt;obvious&lt;/em&gt; that it would.&lt;/p&gt;

&lt;p&gt;Therefore, it may be worth asking: Are there some paths of AI development that differentially reduce non-AI x-risk?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Theory of change:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Research on how to direct AI may inform efforts by the developers of advanced AI, which may ultimately reduce x-risk.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Who’s working on it?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Nobody, to my knowledge.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pros:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;If you believe we need to build TAI to avert non-AI x-risks, then it stands to reason that you should also want to know how to direct TAI to accomplish that end.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;This line of research is highly neglected (to my knowledge, there are zero people working on it).&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;Non-AI x-risks seem less concerning than AI x-risk, so it seems better to work directly on reducing AI x-risk.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;If you believe we should slow down AI development, then this line of research doesn’t matter as much. And I do believe we should slow down AI development, and I believe that a wide range of worldviews should agree with me on that (see Appendix: &lt;a href=&quot;https://docs.google.com/document/d/1w1vVTiihUTqFye2hIaoGuqJgw-G5LzeQ8x0yoPQ-Ilg/edit?tab=t.0&quot;&gt;A moratorium is the best outcome&lt;/a&gt;).&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;At a glance, the problem seems difficult to make progress on (see below).&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;My initial thoughts on this line of research:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;You can’t control what general AI is good at. It would be good at everything. There is no known way to make it (say) good at defending against biological weapons, but bad at creating biological weapons.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;A narrow “cooperation superintelligence” would be better than a narrow “scientist superintelligence” because the latter increases technological x-risk. But based on current trends in AI, my guess is that we could develop a “scientist ASI” that’s bad at cooperation, but we couldn’t develop a “cooperation ASI” that’s bad at science. So this idea is likely a dead end.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Even if we could build a “cooperation ASI”, we still need to solve alignment problems first. So it seems better to focus on solving alignment.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;a-short-timelines-alignment-plan-that-doesnt-rely-on-bootstrapping&quot;&gt;A short-timelines alignment plan that doesn’t rely on bootstrapping&lt;/h3&gt;

&lt;p&gt;To my knowledge, every major AI alignment plan depends on alignment bootstrapping, i.e., using AI to solve AI alignment. I am skeptical that bootstrapping will work, and even if you think it will probably work (with, say, 90% credence), you should still want a contingency plan.&lt;/p&gt;

&lt;p&gt;Write a research agenda for how to solve AI alignment &lt;em&gt;without&lt;/em&gt; using bootstrapping.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Theory of change:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If people come up with sufficiently good plans, then we might solve alignment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Who’s working on it?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Peter Gebauer is running a &lt;a href=&quot;https://manifund.org/projects/contest-for-better-short-timeline-agi-safety-plans-&quot;&gt;contest&lt;/a&gt; for short-timelines AI safety plans, but the plans are allowed to depend on bootstrapping (e.g. Gebauer favorably cites &lt;a href=&quot;https://storage.googleapis.com/deepmind-media/DeepMind.com/Blog/evaluating-potential-cybersecurity-threats-of-advanced-ai/An_Approach_to_Technical_AGI_Safety_Apr_2025.pdf&quot;&gt;DeepMind’s plan&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;There are some plans that say something like “stop developing AI until we solve alignment” (ex: &lt;a href=&quot;https://techgov.intelligence.org/research/ai-governance-to-avoid-extinction&quot;&gt;MIRI&lt;/a&gt;; &lt;a href=&quot;https://www.narrowpath.co/&quot;&gt;Narrow Path&lt;/a&gt;), which is valid (and I agree), but it’s not a technical plan.&lt;/p&gt;

&lt;p&gt;The closest thing I’ve seen is &lt;a href=&quot;https://www.lesswrong.com/posts/HfqbjwpAEGep9mHhc/the-plan-2023-version&quot;&gt;John Wentworth’s research agenda&lt;/a&gt;, but it specifically invokes the &lt;a href=&quot;https://knowyourmeme.com/memes/profit&quot;&gt;underpants gnome meme&lt;/a&gt;, i.e., the plan has a huge hole in the middle.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pros:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;Existing plans are insufficiently rigorous.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;To my knowledge, there are zero meaningful plans that don’t rely on alignment bootstrapping. If bootstrapping turns out not to work, every plan fails. There is a gap to be filled.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;It is highly unlikely that any satisfactory plan exists.&lt;/p&gt;

    &lt;ul&gt;
      &lt;li&gt;I think trying to create plans is a reasonable idea on the off chance that somebody &lt;em&gt;does&lt;/em&gt; come up with a good plan, but I don’t think it’s a good use of marginal philanthropic resources.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;An alignment plan still leaves &lt;a href=&quot;https://docs.google.com/document/d/1w1vVTiihUTqFye2hIaoGuqJgw-G5LzeQ8x0yoPQ-Ilg/edit?tab=t.0#bookmark=kix.o881tulnpfpa&quot;&gt;non-alignment problems&lt;/a&gt; unsolved.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;AI companies cannot be trusted to implement a safe plan even if one exists.&lt;/p&gt;

    &lt;ul&gt;
      &lt;li&gt;But the plan existing does increase the chance that companies follow the plan, or that external pressures can force companies to follow it.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;rigorous-analysis-of-the-various-ways-alignment-bootstrapping-could-fail&quot;&gt;Rigorous analysis of the various ways alignment bootstrapping could fail&lt;/h3&gt;

&lt;p&gt;I’m pessimistic about the prospects of alignment bootstrapping, and I’ve seen various AI safety researchers express similar skepticism, but I’ve never seen a rigorous analysis of the concerns with alignment bootstrapping.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Theory of change:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The AI companies with AI safety plans are all expecting bootstrapping to work. A thorough critique could convince AI companies to develop better plans, create more of a consensus among ML researchers that bootstrapping is inadequate, or convince policy-makers that they need to force AI companies to be safer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Who’s working on it?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Nobody, to my knowledge.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pros:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;This sort of analysis would be feasible to write—it would require expertise and time investment, but it doesn’t require novel research.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;The theory of change seems weak—if companies haven’t already figured out that their plans are inadequate, then I doubt that more criticism is going to change their minds.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
&lt;p&gt;Independent of AI companies’ top-down plans, it’s helpful if you can better inform alignment researchers about what they should be focusing on. But my guess is that an analysis like this wouldn’t shift much work.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;h1 id=&quot;future-work&quot;&gt;Future work&lt;/h1&gt;

&lt;h2 id=&quot;pros-and-cons-of-slowing-down-ai-development-with-numeric-credences&quot;&gt;Pros and cons of slowing down AI development, with numeric credences&lt;/h2&gt;

&lt;p&gt;I addressed this to some extent, but I could’ve gone into more detail, and I didn’t do any numeric analysis.&lt;/p&gt;

&lt;p&gt;I would like to see a more formal model that includes how the tradeoff changes based on considerations like:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;One’s view of population ethics&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;The importance of preventing deaths vs. causing births&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Temporal discount rate or longtermism&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;P(doom)&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;The extent to which P(doom) is reduced if AI development slows down&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Existential risk from sources other than AI (see &lt;a href=&quot;#quantitative-model-on-ai-x-risk-vs-other-x-risks&quot;&gt;Quantitative model on AI x-risk vs. other x-risks&lt;/a&gt;)&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;How AI development interacts with other x-risks&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Relatedly, how does P(doom) change what actions you’re willing to take? I often see people assume that at a P(doom) of, say, 25%, pausing AI development is bad. That seems wrong to me. I believe that at 25% you should be about as aggressive (about pushing for mitigations) as you would be at 95%, although I haven’t put in the work to come up with a detailed justification for this position. The basic argument is that x-risk looks very bad on longtermist grounds, whereas a delay of even (say) 100 years looks comparatively cheap.&lt;/p&gt;
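&lt;p&gt;To make the shape of that argument concrete, here is a minimal expected-value sketch. Every number in it (&lt;code&gt;future_value&lt;/code&gt;, &lt;code&gt;delay_cost&lt;/code&gt;, &lt;code&gt;risk_reduction&lt;/code&gt;) is an illustrative assumption, not an estimate from this post:&lt;/p&gt;

```python
# Toy expected-value comparison. All numbers are illustrative assumptions:
# value is measured in "century-equivalents" of the long-term future.
future_value = 1e6  # assume the future is worth ~a million centuries of value
delay_cost = 1.0    # assume pausing costs roughly one century of that value

def ev_of_pausing(p_doom, risk_reduction=0.1):
    # Benefit: averted chance of losing everything; cost: the delay itself.
    return p_doom * risk_reduction * future_value - delay_cost

print(ev_of_pausing(0.25))  # ~25,000: already overwhelmingly positive
print(ev_of_pausing(0.95))  # ~95,000: larger, but the same decision
```

&lt;p&gt;On these (longtermist) assumptions, the benefit term dwarfs the delay cost at a P(doom) of 25% just as it does at 95%, which is why the decision barely changes.&lt;/p&gt;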

&lt;h2 id=&quot;quantitative-model-on-ai-x-risk-vs-other-x-risks&quot;&gt;Quantitative model on AI x-risk vs. other x-risks&lt;/h2&gt;

&lt;p&gt;There is an argument that we need TAI soon because otherwise we are likely to kill ourselves via some other x-risk. I have a rough idea for how I could build a quantitative model to test under what assumptions this argument works. Building that model wasn’t a priority for this report, but I could do it without much additional effort.&lt;/p&gt;

&lt;p&gt;The basic elements the model needs are&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;
    &lt;p&gt;X-risk from AI&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;X-risk from other sources&lt;/p&gt;

    &lt;ul&gt;
      &lt;li&gt;How to estimate these? Expert forecasts &lt;a href=&quot;https://forum.effectivealtruism.org/posts/Kuf5Nn6qNCp2kyYvo/is-it-so-much-to-ask-for-a-nice-reliable-aggregated-x-risk&quot;&gt;seem unreliable&lt;/a&gt;.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;How much we can reduce AI x-risk by delaying development&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;X-risk from other sources, conditional on TAI&lt;/p&gt;

    &lt;ul&gt;
      &lt;li&gt;
        &lt;p&gt;For an aligned totalizing TAI singleton that quickly controls the world, x-risk would be ~0.&lt;/p&gt;
      &lt;/li&gt;
      &lt;li&gt;
        &lt;p&gt;TAI doesn’t trivially decrease other x-risks; it could increase x-risk by accelerating technological growth (which means it’s easier to build dangerous technology—see &lt;a href=&quot;https://nickbostrom.com/papers/vulnerable.pdf&quot;&gt;The Vulnerable World Hypothesis&lt;/a&gt;). The mechanism of decreasing x-risk isn’t that TAI is smarter; the mechanism is that it could increase global coordination / centralize the ability to make dangerous technology.&lt;/p&gt;
      &lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
&lt;/ol&gt;
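&lt;p&gt;As a sketch of how these four elements might fit together, here is a toy version of the model; all probabilities below are placeholder assumptions, not my actual estimates:&lt;/p&gt;

```python
# Toy model combining the four elements above. Order of events: non-AI risk
# accrues until TAI arrives, then AI risk, then residual non-AI risk after TAI.
def total_xrisk(p_ai, p_other_pre_tai, p_other_post_tai, ai_reduction=0.0):
    p_ai_eff = p_ai * (1 - ai_reduction)  # element 3: delay reduces AI risk
    p_survive = (1 - p_other_pre_tai) * (1 - p_ai_eff) * (1 - p_other_post_tai)
    return 1 - p_survive

# Build TAI soon: little time for non-AI risk, but full AI risk (elements 1, 2, 4).
soon = total_xrisk(p_ai=0.20, p_other_pre_tai=0.01, p_other_post_tai=0.02)

# Delay TAI: more pre-TAI non-AI risk, but suppose the delay halves AI risk.
delayed = total_xrisk(p_ai=0.20, p_other_pre_tai=0.05,
                      p_other_post_tai=0.02, ai_reduction=0.5)

print(round(soon, 4), round(delayed, 4))  # under these inputs, delaying wins
```

&lt;p&gt;The “we need TAI soon” argument only goes through when the pre-TAI non-AI risk term is large relative to the reduction in AI risk that a delay buys, which is exactly the tradeoff the full model would need to estimate.&lt;/p&gt;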

&lt;h2 id=&quot;deeper-investigation-of-the-ai-arms-race-situation&quot;&gt;Deeper investigation of the AI arms race situation&lt;/h2&gt;

&lt;p&gt;Some open questions:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;What are some historical examples of arms races that were successfully aborted? What happened to make things go well?&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;How hard are different factions racing, and what would it take to convince them to slow down?&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;How likely are we to end up in a bad totalitarian regime post-TAI if various parties end up “winning the race” and building an alignable ASI?&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;does-slowing-downpausing-ai-help-solve-non-alignment-problems&quot;&gt;Does slowing down/pausing AI help solve non-alignment problems?&lt;/h2&gt;

&lt;p&gt;Pausing (or at least slowing down) clearly gives us more time to solve alignment, and has clear downsides in terms of opportunity cost. But other effects are less clear. Pausing may help with some other big problems: &lt;a href=&quot;https://forum.effectivealtruism.org/posts/2cZAzvaQefh5JxWdb/bringing-about-animal-inclusive-ai&quot;&gt;animal-inclusive AI&lt;/a&gt;; &lt;a href=&quot;https://eleosai.org/post/research-priorities-for-ai-welfare/&quot;&gt;AI welfare&lt;/a&gt;; &lt;a href=&quot;https://longtermrisk.org/research-agenda&quot;&gt;S-risks from conflict&lt;/a&gt;; &lt;a href=&quot;https://www.lesswrong.com/posts/GAv4DRGyDHe2orvwB/gradual-disempowerment-concrete-research-projects&quot;&gt;gradual disempowerment&lt;/a&gt;; &lt;a href=&quot;https://forum.effectivealtruism.org/posts/LpkXtFXdsRd4rG8Kb/reducing-long-term-risks-from-malevolent-actors&quot;&gt;risks from malevolent actors&lt;/a&gt;; &lt;a href=&quot;https://forum.effectivealtruism.org/posts/HqmQMmKgX7nfSLaNX/moral-error-as-an-existential-risk&quot;&gt;moral error&lt;/a&gt;. There are some arguments for and against pausing being useful for these non-alignment problems; for more on this topic, see &lt;a href=&quot;https://docs.google.com/document/d/1w1vVTiihUTqFye2hIaoGuqJgw-G5LzeQ8x0yoPQ-Ilg/edit?tab=t.0#bookmark=kix.o881tulnpfpa&quot;&gt;Slowing down is a general-purpose solution to every non-alignment problem&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I have never seen an attempt to analyze why pausing AI development might or might not help with non-alignment problems; this seems like an important question.&lt;/p&gt;

&lt;p&gt;Some considerations:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;We don’t want to build TAI until we become more &lt;a href=&quot;https://forum.effectivealtruism.org/posts/hhyjbjwN96NWRSvv7/clarifying-wisdom-foundational-topics-for-aligned-ais-to&quot;&gt;wise&lt;/a&gt;, but it’s not clear that we &lt;em&gt;can&lt;/em&gt; become more wise, or perhaps TAI would be wiser than we would.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Pausing may increase misuse risk or some related risk.&lt;/p&gt;

    &lt;ul&gt;
      &lt;li&gt;One conceivable outcome, albeit one that doesn’t seem particularly likely, is that AI companies become increasingly wealthy and powerful by selling pre-TAI AI services, and this concentration of power ultimately allows them to build TAI in a way that goes against most people’s interests.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;
&lt;p&gt;Peace and democratic governance have been trending upward over the past century (see &lt;a href=&quot;https://en.wikipedia.org/wiki/The_Better_Angels_of_Our_Nature&quot;&gt;The Better Angels of Our Nature&lt;/a&gt;). Slowing/pausing means the world will probably be more peaceful and democratic when we get TAI, which is probably desirable (less chance of power struggle, etc.).&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Moral circles have expanded over time (although they haven’t strictly expanded—see Gwern’s &lt;a href=&quot;https://gwern.net/narrowing-circle&quot;&gt;The Narrowing Circle&lt;/a&gt;). It’s better to develop TAI when moral circles are wider.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;determine-when-will-be-the-right-time-to-push-for-strong-restrictions-on-ai-if-not-now&quot;&gt;Determine when will be the right time to push for strong restrictions on AI (if not now)&lt;/h2&gt;

&lt;p&gt;A common view: “We should push for strong restrictions on AI, but now is not the right time.”&lt;/p&gt;

&lt;p&gt;I disagree with this view; I think now &lt;em&gt;is&lt;/em&gt; the right time. But suppose it isn’t. When will be the right time?&lt;/p&gt;

&lt;p&gt;Consider the tradeoff wherein if you do advocacy later, then&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;
    &lt;p&gt;the risks of AI will be more apparent;&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;but there’s a greater chance that you’re too late to do anything about it.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Is there some inflection point where the first consideration starts outweighing the second?&lt;/p&gt;

&lt;p&gt;And how do you account for uncertainty? (Uncertainty means you should do advocacy earlier, because being too late is much worse than being too early.)&lt;/p&gt;

&lt;p&gt;I don’t think this question is worth trying to answer because I am sufficiently confident that now is the right time. But I think this is an important question from the view that now is too early.&lt;/p&gt;

&lt;h1 id=&quot;supplements&quot;&gt;Supplements&lt;/h1&gt;

&lt;p&gt;I have written two supplements in separate docs:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;a href=&quot;https://docs.google.com/document/d/1w1vVTiihUTqFye2hIaoGuqJgw-G5LzeQ8x0yoPQ-Ilg/&quot;&gt;Appendix&lt;/a&gt;: Some miscellaneous topics that weren’t quite relevant enough to include in the main text.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;a href=&quot;https://docs.google.com/document/d/1vWB5CgH69W4lmpZrCXaD3n2Jqz32kVnvCJwUA2RE8Fw/&quot;&gt;List of relevant organizations&lt;/a&gt;: A reference list of orgs doing work in AI-for-animals or AI policy/advocacy, with brief descriptions of their activities.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ol&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;For example, almost every frontier AI company opposed SB-1047; Anthropic supported the bill conditional on amendment, and Elon Musk supported it but xAI did not take any public position. See &lt;a href=&quot;https://chatgpt.com/share/68b20f49-1ae8-8011-881b-1b2747818a05&quot;&gt;ChatGPT&lt;/a&gt; for a compilation of sources. &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:2&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I asked ChatGPT Deep Research to tally up funding for research vs. policy and it &lt;a href=&quot;https://chatgpt.com/share/685885d3-f564-8011-90fb-9b7fb46d774f&quot;&gt;found&lt;/a&gt; ~2x as many researchers as policy people and also 2x the budget, although it miscounted some things; most of what it counted as “AI policy” is (1) unrelated to x-risk and (2) policy research, not policy advocacy; and it only included big orgs (e.g. it missed the long tail of independent alignment researchers). So I believe the true ratio is even more skewed than 2:1. &lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:3&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;According to my research, it’s not difficult to find examples of times when UK policy influenced US policy, but it’s still unclear to me how strong this effect is. There are also some theoretical arguments for and against the importance of UK AI policy, but I didn’t find any of them particularly compelling, so I remain agnostic. &lt;a href=&quot;#fnref:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:4&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I find this state of affairs confusing given how many people profess belief in short timelines. I think part of the reason is that people involved in AI safety tend to be intellectual researcher-types (like me, for example) who are more likely to orient their work toward “what is going to improve the state of knowledge?” rather than “what is likely to pay off in the near future?” &lt;a href=&quot;#fnref:4&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>Healthy Cooking Tips from a Lazy Person</title>
				<pubDate>Fri, 29 Aug 2025 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2025/08/29/lazy_cooking_tips/</link>
				<guid isPermaLink="true">http://mdickens.me/2025/08/29/lazy_cooking_tips/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;&lt;img src=&quot;/assets/images/chopping-onion.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;a href=&quot;https://xcancel.com/naledimashishi/status/1494352227456233476&quot;&gt;source&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The problem with most “lazy cooking” advice is that it’s not lazy enough. Today I bring you some truly lazy ways of eating healthy.&lt;/p&gt;

&lt;p&gt;This is the advice that I would’ve liked to hear when I was a lazy teenager. I’m still lazy, but I’m better at making food now. (I’m not going to say I’m better at cooking, because the way I make most food could only very generously be described as “cooking”.)&lt;/p&gt;

&lt;p&gt;All my lazy meals are vegan because I’m vegan, but if anything, that works to my advantage because the easiest animal foods still take more work than the easiest plant foods. (You can eat raw vegetables but you can’t eat raw chicken.&lt;sup id=&quot;fnref:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;)&lt;/p&gt;

&lt;h2 id=&quot;contents&quot;&gt;Contents&lt;/h2&gt;

&lt;ul id=&quot;markdown-toc&quot;&gt;
  &lt;li&gt;&lt;a href=&quot;#contents&quot; id=&quot;markdown-toc-contents&quot;&gt;Contents&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#healthy-foods-that-require-no-preparation-whatsoever&quot; id=&quot;markdown-toc-healthy-foods-that-require-no-preparation-whatsoever&quot;&gt;Healthy foods that require no preparation whatsoever&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#healthy-foods-that-take-less-than-one-minute-of-preparation&quot; id=&quot;markdown-toc-healthy-foods-that-take-less-than-one-minute-of-preparation&quot;&gt;Healthy foods that take less than one minute of preparation&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#cooking-tips&quot; id=&quot;markdown-toc-cooking-tips&quot;&gt;Cooking tips&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#notes&quot; id=&quot;markdown-toc-notes&quot;&gt;Notes&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;!-- more --&gt;

&lt;h2 id=&quot;healthy-foods-that-require-no-preparation-whatsoever&quot;&gt;Healthy foods that require no preparation whatsoever&lt;/h2&gt;

&lt;ol&gt;
  &lt;li&gt;Nuts and seeds. Buy a bag and eat them out of the bag.
    &lt;ul&gt;
      &lt;li&gt;Or buy trail mix for more variety.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;Nut butter. You can eat it right out of the jar if you want to.
    &lt;ul&gt;
      &lt;li&gt;Some people are under the misconception that the big-brand peanut butters like Jif and Skippy are bad for you because they contain sugar. The Jif that’s in my cabinet right now only gets 7% of its calories from sugar, and that little bit of sugar makes it taste 1000% better. That’s a flavor-to-sugar ratio of 14,285%; you can’t argue with the math.&lt;/li&gt;
      &lt;li&gt;Some people believe peanut butter is bad for you because it contains a lot of fat. Trans fats and saturated fats are the “bad fats”;&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt; peanut butter is made of unsaturated fats, which are the “good fats”.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;Many fruits can be eaten with no prep or with very little prep. You have to peel bananas, but peeling a banana is no harder than opening a candy wrapper.&lt;/li&gt;
  &lt;li&gt;A lot of vegetables can be eaten raw. They taste better when you cook and season them, but sometimes you have to sacrifice flavor in the name of laziness.&lt;/li&gt;
  &lt;li&gt;There is nothing wrong with eating tofu raw. But when it comes to zero-prep soy-based foods, my go-to is dry roasted edamame.&lt;/li&gt;
  &lt;li&gt;Soylent and Huel aren’t exactly &lt;em&gt;healthy&lt;/em&gt;, but they’re not &lt;em&gt;not&lt;/em&gt; healthy, either.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2 id=&quot;healthy-foods-that-take-less-than-one-minute-of-preparation&quot;&gt;Healthy foods that take less than one minute of preparation&lt;/h2&gt;

&lt;ol&gt;
  &lt;li&gt;Get some vegetables (e.g., carrots or broccoli) and dip them in hummus.&lt;/li&gt;
  &lt;li&gt;Pour a bowl of cereal.
    &lt;ul&gt;
      &lt;li&gt;Breakfast cereals are often bad for you, but there are some good ones. Last year I reviewed &lt;a href=&quot;https://mdickens.me/2025/01/17/high_protein_breakfast_cereals/&quot;&gt;high-protein breakfast cereals&lt;/a&gt;, all of which I would describe as healthy. There are also many low-protein but still healthy cereals; for example, Cheerios are made of whole oats.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;Buttered toast is cool, but Big Toaster doesn’t want you to know that buttered untoasted bread is maybe even better.&lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt;&lt;/li&gt;
  &lt;li&gt;Three bean recipes in increasing order of prep time + flavorfulness:
    &lt;ol&gt;
      &lt;li&gt;Open can of beans; eat straight out of the can. I personally would use a spoon, but if you’d rather pour the beans directly into your mouth, I won’t judge.&lt;/li&gt;
      &lt;li&gt;Open can of beans; pour into bowl; add some kind of seasoning; eat.
        &lt;ul&gt;
          &lt;li&gt;Some seasoning ideas: hot sauce; garlic powder; Chesapeake Bay seasoning; garlic &amp;amp; herb seasoning mix (like &lt;a href=&quot;https://www.amazon.com/McCormick-Salt-Free-Garlic-Seasoning/dp/B08KRC7V7J&quot;&gt;this&lt;/a&gt;).&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
      &lt;li&gt;Do #2, but also microwave it before eating. (I know I promised sub-minute prep times, but this recipe will take more like two minutes.)&lt;/li&gt;
    &lt;/ol&gt;
  &lt;/li&gt;
&lt;/ol&gt;

&lt;h2 id=&quot;cooking-tips&quot;&gt;Cooking tips&lt;/h2&gt;

&lt;p&gt;I mostly eat easy meals, but I do real cooking once every couple of days—my “real cooking” mostly means “chop some stuff and throw it in an air fryer”. But sometimes I even cook things in a pot. I have a few methods for making my cooking easier.&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Recipes often call for the same set of spices. Pre-mix your spices or buy them pre-mixed.
    &lt;ul&gt;
      &lt;li&gt;Curry recipes often call for garam masala, cumin, and coriander. I’m not sure what’s going on there because the two main ingredients of garam masala are cumin and coriander. When I cook a big pot of beans, I just throw in a ton of garam masala.&lt;/li&gt;
      &lt;li&gt;My most-used spices are a pre-mixed garlic &amp;amp; herb seasoning, a pre-mixed garam masala, and a pre-mixed all-purpose spice mix consisting of salt + pepper + garlic powder.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;You can buy vegetables pre-chopped if you’re willing to pay more.
    &lt;ul&gt;
      &lt;li&gt;Onions hurt my eyes a lot. I buy them pre-chopped, which saves time and saves my eyes.&lt;/li&gt;
      &lt;li&gt;As a middle ground, you can buy pre-peeled garlic cloves. Peeling is much harder than chopping (for me at least) so pre-peeled garlic lets me skip the worst part.&lt;/li&gt;
      &lt;li&gt;I am not the first person to observe that most recipes don’t call for enough garlic, but I think even most people who say “recipes don’t call for enough garlic” still don’t use enough garlic. If a recipe calls for 2 cloves then I will use about 20 cloves and I’m still not sure I’m using enough. (This doesn’t have anything to do with being lazy but I need to express my garlic-related feelings.)&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;Many oven or stovetop recipes can be done faster in an air fryer. An air fryer cooks food fast like a microwave, but it makes the food crispy instead of mushy and weird.
    &lt;ul&gt;
      &lt;li&gt;I’ve heard a stereotype that Asian moms use their ovens exclusively as pot-and-pan storage. If that’s true then I guess that makes me an Asian mom.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;There are many convenient-but-unhealthy foods, too. It’s okay to eat unhealthy food sometimes.&lt;/li&gt;
&lt;/ol&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:3&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I guess you could eat raw eggs if you really wanted to. People talk about Rocky, but I’ve always associated eating raw eggs with &lt;a href=&quot;https://www.youtube.com/watch?v=cYqCtpa9_Ms&quot;&gt;the dad from The Neverending Story&lt;/a&gt;. &lt;a href=&quot;#fnref:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Some people don’t even believe saturated fat is bad for you. I wrote more about this in &lt;a href=&quot;https://mdickens.me/2024/09/26/outlive_a_critical_review/#the-data-are-unclear-on-whether-reducing-saturated-fat-intake-is-beneficial&quot;&gt;my &lt;em&gt;Outlive&lt;/em&gt; review&lt;/a&gt;. &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:2&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Lest there be any confusion about how I previously said I was vegan: when I say “butter” what I actually mean is Earth Balance. In fact, butter isn’t good for you, so if I were eating real butter, bread + butter wouldn’t qualify as a healthy meal. Earth Balance is made of unsaturated fats so it’s healthy.&lt;/p&gt;

      &lt;p&gt;And of course I eat whole wheat bread, specifically Dave’s Killer Bread which is the undisputed best-tasting whole grain bread. &lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>Doctor Strange Didn't See Only One Victory out of 14,000,605 Futures</title>
				<pubDate>Fri, 25 Jul 2025 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2025/07/25/doctor_strange/</link>
				<guid isPermaLink="true">http://mdickens.me/2025/07/25/doctor_strange/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;Or, more accurately, the fact that he said the Avengers only won once can’t be taken as evidence about what he really saw.&lt;/p&gt;

&lt;p&gt;This post contains spoilers for &lt;em&gt;Avengers: Infinity War&lt;/em&gt; and &lt;em&gt;Avengers: Endgame&lt;/em&gt;.&lt;/p&gt;

&lt;!-- more --&gt;

&lt;p&gt;Doctor Strange told the heroes that he used the Time Stone to look into 14,000,605 futures, and saw only one future where they won.&lt;/p&gt;

&lt;p&gt;He spent the rest of the two movies steering events to play out as he saw them in this one future.&lt;/p&gt;

&lt;p&gt;Therefore, while Strange was using the Time Stone, he must have taken the exact same actions, including telling the Avengers that there was only one way to win.&lt;/p&gt;

&lt;p&gt;Strange telling the heroes (especially Tony Stark) that they only won in one future was a critical element of his plan—the plan only worked because he said that.&lt;/p&gt;

&lt;p&gt;But when he played out this scenario using the Time Stone, he couldn’t have known at that point that there was only one way to win, because &lt;em&gt;he hadn’t run the scenarios yet&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;So what actually happened was:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;In one of the possible futures, Doctor Strange told Tony that there was only one way to win, even though Strange didn’t yet know whether that was true.&lt;/li&gt;
  &lt;li&gt;This worked, and Thanos was defeated.&lt;/li&gt;
  &lt;li&gt;In real life, Doctor Strange replicated this plan.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;It &lt;em&gt;could&lt;/em&gt; be true that this was the only future where they won. But when Doctor Strange said it’s the only future where they won, that statement was not attached to truth in any way. The reason he said it wasn’t that it was true; it was that he needed to say it for the Avengers to win.&lt;/p&gt;

&lt;p&gt;So, in the end&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;, we have no idea whether it’s true.&lt;/p&gt;

&lt;p&gt;Edited 2025-07-26 to change “out” in the title from capital to lower case. I thought “out” was supposed to be capitalized but after writing it, it seemed weird to me, so I did some research. “Out” is normally an adverb, but in this sentence, “out of” functions as a preposition, and prepositions should be lower case. The Chicago Manual of Style &lt;a href=&quot;https://www.chicagomanualofstyle.org/qanda/data/faq/topics/CapitalizationTitles/faq0100.html&quot;&gt;says&lt;/a&gt; “out of” should be lower case so I changed my title. But apparently this is a thorny issue, with the Chicago guide originally giving incorrect guidance, and then they updated it after some readers wrote in to disagree. (The thing they got wrong wasn’t directly relevant to my title, it was about using “out of” in a different context.) So if they can get it wrong then I don’t feel too bad about getting it wrong myself.&lt;/p&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;game &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>Is It So Much to Ask for a Nice Reliable Aggregated X-Risk Forecast?</title>
				<pubDate>Sat, 12 Jul 2025 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2025/07/12/aggregated_x-risk_forecasts/</link>
				<guid isPermaLink="true">http://mdickens.me/2025/07/12/aggregated_x-risk_forecasts/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;On most questions about the future, I don’t hold a strong view. I read the aggregate prediction of forecasters on &lt;a href=&quot;https://www.metaculus.com/&quot;&gt;Metaculus&lt;/a&gt; or &lt;a href=&quot;https://manifold.markets/&quot;&gt;Manifold Markets&lt;/a&gt; and then I pretty much believe whatever it says.&lt;/p&gt;

&lt;p&gt;Various attempts have been made to forecast existential risk. I would like to be able to form views based on those forecasts—especially on non-AI x-risks, because I barely know anything about synthetic biology or nuclear winter or catastrophic climate change. Unfortunately, none of the aggregate forecasts look reliable.&lt;/p&gt;

&lt;!-- more --&gt;

&lt;p&gt;First, some general notes about forecasting distant&lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt; and low-probability&lt;sup id=&quot;fnref:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt; events:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;According to a &lt;a href=&quot;https://www.openphilanthropy.org/research/how-feasible-is-long-range-forecasting/&quot;&gt;literature review&lt;/a&gt; by Luke Muehlhauser, we don’t have good data on long-range forecasters, and we don’t know if people with short-range forecasting skill can make good forecasts over long ranges.&lt;/li&gt;
  &lt;li&gt;According to an &lt;a href=&quot;https://niplav.site/range_and_forecasting_accuracy.html&quot;&gt;analysis&lt;/a&gt; by niplav, Metaculus predictions become less accurate as the duration gets longer.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So we have good reason to doubt the ability of forecasters to predict existential risk, even when they are known to make accurate forecasts on near-term outcomes such as elections.&lt;/p&gt;

&lt;p&gt;Now let’s look at what attempts have been made to forecast x-risk, and why I don’t find any of them satisfying.&lt;/p&gt;

&lt;p&gt;The most rigorous attempt at an aggregate forecast comes from the &lt;a href=&quot;https://forecastingresearch.org/xpt&quot;&gt;Existential Risk Persuasion Tournament&lt;/a&gt;. The tournament brought in superforecasters and domain experts to make predictions, then had them attempt to persuade each other and make predictions again.&lt;/p&gt;

&lt;p&gt;In the end, domain experts forecasted extinction to be an order of magnitude more likely than the superforecasters did.&lt;/p&gt;

&lt;p&gt;And even the domain experts forecasted only a 3% chance of AI extinction. My number is much higher than that, and I notice myself not changing my beliefs after reading this.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Edited 2025-09-09 to add:&lt;/em&gt; A September 2025 follow-up &lt;a href=&quot;https://forecastingresearch.org/near-term-xpt-accuracy&quot;&gt;report&lt;/a&gt; from the Forecasting Research Institute found that the domain experts underestimated the rate of AI progress 2022–2025, and superforecasters &lt;em&gt;dramatically&lt;/em&gt; underestimated the rate of progress; see also &lt;a href=&quot;https://x.com/Research_FRI/status/1962834279689265402&quot;&gt;Twitter summary thread&lt;/a&gt;. Notably, only 2.3% of superforecasters predicted AI to win a gold medal at the International Mathematics Olympiad, which it did in 2025.&lt;/p&gt;

&lt;p&gt;Scott Alexander &lt;a href=&quot;https://www.astralcodexten.com/p/the-extinction-tournament&quot;&gt;wrote&lt;/a&gt; about the tournament:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;Confronted with the fact that domain experts/superforecasters had different estimates than they did, superforecasters/domain experts refused to update, and ended an order of magnitude away from each other. That seems like an endorsement of non-updating from superforecasters and domain experts! And who am I to disagree with such luminaries?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Peter McCluskey, who participated in the tournament as a superforecaster, &lt;a href=&quot;https://www.lesswrong.com/posts/YTPtjExcwpii6NikG/existential-risk-persuasion-tournament&quot;&gt;wrote a personal account&lt;/a&gt;. His experience aligns with my (biased?) assumption that the people reporting very low P(doom) numbers just don’t understand the AI alignment problem.&lt;/p&gt;

&lt;p&gt;Okay, the lesson from the X-Risk Persuasion Tournament is that it’s not clear whether we can learn anything from it.&lt;/p&gt;

&lt;p&gt;What about &lt;a href=&quot;https://www.metaculus.com/&quot;&gt;Metaculus&lt;/a&gt;?&lt;/p&gt;

&lt;p&gt;Metaculus has several relevant forecasts, but they seem to contradict each other. Some example forecasts:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;&lt;a href=&quot;https://www.metaculus.com/questions/578/human-extinction-by-2100/&quot;&gt;Will humans go extinct before 2100?&lt;/a&gt; 0.3% chance. (This is the Metaculus question with the most activity.)&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://www.metaculus.com/notebooks/2568/ragnar%25C3%25B6k-question-series-results-so-far/&quot;&gt;Ragnarok question series:&lt;/a&gt; Implied 12.16% chance (community prediction) or 3.66% chance (Metaculus prediction)&lt;sup id=&quot;fnref:4&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:4&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt; of a &amp;gt;95% decline in population by 2100.&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://www.metaculus.com/questions/12840/existential-risk-from-agi-vs-agi-timelines/&quot;&gt;How does the level of existential risk posed by AGI depend on its arrival time?&lt;/a&gt; Answers range from 50% to 9.3% depending on date range, which is maybe consistent with question 2 above, but definitely not consistent with question 1.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In a comment, Linch &lt;a href=&quot;https://forum.effectivealtruism.org/posts/oGhbJgxREBTp4W38C/are-there-superforecasts-for-existential-risk?commentId=pfAACKBuiXJyr373T&quot;&gt;provides&lt;/a&gt; some reasons to be suspicious of Metaculus’ estimates.&lt;/p&gt;

&lt;blockquote&gt;
  &lt;ul&gt;
    &lt;li&gt;There’s no incentive to do well on those questions.&lt;/li&gt;
    &lt;li&gt;The feedback loops are horrible&lt;/li&gt;
    &lt;li&gt;Indeed, some people have actually joked betting low on the more existential questions since they won’t get a score if we’re all dead (at least, I hope they’re joking)&lt;/li&gt;
    &lt;li&gt;At the object-level, I just think people are really poorly calibrated about x-risk questions&lt;/li&gt;
    &lt;li&gt;My comment &lt;a href=&quot;https://www.metaculus.com/questions/1500/ragnar%25C3%25B6k-question-series-if-a-global-catastrophe-occurs-will-it-be-due-to-either-human-made-climate-change-or-geoengineering/#comment-24843&quot;&gt;here&lt;/a&gt; arguably changed the community’s estimates by ~10%&lt;/li&gt;
  &lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;In 2008, the Future of Humanity Institute ran a &lt;a href=&quot;https://www.fhi.ox.ac.uk/reports/2008-1.pdf&quot;&gt;Global Catastrophic Risks Survey&lt;/a&gt; asking conference participants to give forecasts. The aggregated results look more reasonable than Metaculus or the Existential Risk Persuasion Tournament. But a lot has changed since 2008, so I don’t think I can regard them as up-to-date estimates.&lt;/p&gt;

&lt;p&gt;For forecasting AI risk, there is a &lt;a href=&quot;https://arxiv.org/pdf/2401.02843&quot;&gt;2023 survey&lt;/a&gt; of AI experts (see section 4.3). Survey results suggest the experts aren’t thinking carefully—small changes in wording produced vastly different responses.&lt;/p&gt;

&lt;p&gt;For example, respondents predicted AI to be able to match humans on all tasks by a median date of 2047, but predicted that AI would not be able to fully automate human labor until 2116.&lt;/p&gt;

&lt;p&gt;Or look at the answers to these two questions:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;What probability do you put on future AI advances causing human extinction or similarly permanent and severe disempowerment of the human species?&lt;/p&gt;
  &lt;ul&gt;
    &lt;li&gt;median: 5%&lt;/li&gt;
    &lt;li&gt;mean: 16.2%&lt;/li&gt;
  &lt;/ul&gt;

  &lt;p&gt;What probability do you put on human inability to control future advanced AI systems causing human extinction or similarly permanent and severe disempowerment of the human species?&lt;/p&gt;
  &lt;ul&gt;
    &lt;li&gt;median: 10%&lt;/li&gt;
    &lt;li&gt;mean: 19.4%&lt;/li&gt;
  &lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;By my reading, the latter outcome is a strict subset of the former, so its probability can be no higher. But instead it’s higher.&lt;/p&gt;

&lt;p&gt;So we have these various aggregate forecasts, all of which seem suspect, and some of which disagree with each other by more than 10x. I really wish there were a canonical aggregate forecast I could rely on, in the same way that I can rely on Metaculus to predict election outcomes. But I don’t think that exists.&lt;/p&gt;

&lt;p&gt;At this point, I trust my own x-risk estimates more than any of those aggregate forecasts. My views happen to line up decently well with &lt;em&gt;some&lt;/em&gt; of the aggregate forecasts, but only by chance. I feel better about &lt;a href=&quot;https://www.tobyord.com/writing/the-precipice-revisited&quot;&gt;Toby Ord’s existential risk estimates&lt;/a&gt; than about any of the forecasting platforms or expert surveys.&lt;/p&gt;

&lt;p&gt;And just because it feels unfair for me to spend all this time talking about forecasts and then not give any forecasts, here are my (poorly-thought-out, weakly-endorsed) probabilities of existential catastrophe&lt;sup id=&quot;fnref:5&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:5&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt; by 2100:&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Source of Risk&lt;/th&gt;
      &lt;th&gt;Probability&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;AI&lt;/td&gt;
      &lt;td&gt;50%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;a href=&quot;https://mdickens.me/2020/07/23/unknown_x-risks/&quot;&gt;unknown risks&lt;/a&gt;&lt;/td&gt;
      &lt;td&gt;3%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;bioengineered pandemic&lt;/td&gt;
      &lt;td&gt;1%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;nanotechnology&lt;/td&gt;
      &lt;td&gt;0.5%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;nuclear war&lt;/td&gt;
      &lt;td&gt;0.3%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;climate change&lt;/td&gt;
      &lt;td&gt;0.1%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;natural pandemic&lt;/td&gt;
      &lt;td&gt;0.01%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:2&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Except probably not because we will probably have superintelligent AI soon. &lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:3&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Except probably not. Extinction from misaligned AI is not “low-probability”. &lt;a href=&quot;#fnref:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:4&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;The community prediction and Metaculus prediction are two different methods for aggregating users’ forecasts. &lt;a href=&quot;#fnref:4&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:5&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;As in, an event that kills all humans or permanently curtails civilization’s potential. &lt;a href=&quot;#fnref:5&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>Annual Subscription Discounts Usually Aren't Worth It</title>
				<pubDate>Mon, 07 Jul 2025 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2025/07/07/annual_subscription_discounts/</link>
				<guid isPermaLink="true">http://mdickens.me/2025/07/07/annual_subscription_discounts/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;It’s common for monthly subscription services to offer a discount if you pay annually instead. That might be a bad deal.&lt;/p&gt;

&lt;p&gt;Example: Suppose a monthly subscription costs $10/month and a one-year subscription gives you a 10% discount, which averages out to $9/month. Say you expect to maintain a subscription for about three years before canceling.&lt;/p&gt;

&lt;p&gt;A one-year subscription will save you about $36 ($1 per month for 36 months), but you can also expect to waste $54: when you decide to stop using it, you will still have (on average) six months of subscription left ($54 = $9/month for 6 months). So you end up spending $18 more than you would have with the monthly plan.&lt;/p&gt;

&lt;p&gt;If you get a one-year subscription that you expect to last three years, then you will end up wasting 1/6 of the total amount you paid for (in expectation). That’s only worth it if the annual subscription offers a discount greater than 1/6.&lt;/p&gt;

&lt;p&gt;If you expect to use the service for five years, you need to get at least a 10% discount to justify switching to an annual subscription.&lt;/p&gt;

&lt;p&gt;In general, you need to use the subscription for at least &lt;code&gt;N&lt;/code&gt; years to justify a discount of &lt;code&gt;1/(2N)&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;How do you guess how long you’ll keep using the service? According to the &lt;a href=&quot;https://en.wikipedia.org/wiki/Lindy_effect&quot;&gt;Lindy effect&lt;/a&gt;, you should expect that you will maintain a subscription for as long again as you’ve already had it. Therefore, if you can get a 10% discount with an annual plan and you’ve already had the subscription for more than five years, you should go ahead and buy the annual plan.&lt;/p&gt;
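The arithmetic above can be sketched in a few lines of Python. This is a minimal illustration of the post’s model (the function name and structure are mine), assuming you cancel with an average of six unused prepaid months remaining:

```python
def annual_plan_net_savings(monthly_price, discount, expected_years):
    """Expected net savings from choosing the annual plan over the monthly plan,
    assuming you cancel with (on average) six months of prepaid time unused."""
    discounted = monthly_price * (1 - discount)  # effective per-month price on the annual plan
    months_used = 12 * expected_years
    savings = (monthly_price - discounted) * months_used  # discount earned on months you actually use
    waste = discounted * 6  # ~6 unused months left over when you cancel
    return savings - waste

# The example from the post: $10/month, 10% annual discount, three years of use
print(annual_plan_net_savings(10, 0.10, 3))  # prints -18.0: the annual plan costs $18 more
```

A positive result means the annual plan comes out ahead. Note that under this particular model the exact break-even discount works out to 1/(2N+1), slightly below the 1/(2N) rule of thumb, which is close enough for a back-of-the-envelope decision.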



                </description>
			</item>
		
			<item>
				<title>LLMs Might Already Be Conscious</title>
				<pubDate>Sat, 05 Jul 2025 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2025/07/05/LLMs_might_already_be_conscious/</link>
				<guid isPermaLink="true">http://mdickens.me/2025/07/05/LLMs_might_already_be_conscious/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;Among people who have thought about LLM consciousness, a common belief is something like&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;LLMs might be conscious soon, but they aren’t yet.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;How sure are we that they aren’t conscious already?&lt;/p&gt;

&lt;p&gt;I made a quick list of arguments for/against LLM consciousness, and it seems to me that high confidence in non-consciousness is not justified. I don’t feel comfortable assigning less than a 10% chance to LLM consciousness, and I believe a 1% chance is unreasonably confident. But I am interested in hearing arguments I may have missed.&lt;/p&gt;

&lt;p&gt;For context, I lean toward the &lt;a href=&quot;https://en.wikipedia.org/wiki/Computational_theory_of_mind&quot;&gt;computational theory of consciousness&lt;/a&gt;, but I also think it’s reasonable to have high uncertainty about which theory of consciousness is correct.&lt;/p&gt;

&lt;!-- more --&gt;

&lt;h2 id=&quot;contents&quot;&gt;Contents&lt;/h2&gt;

&lt;ul id=&quot;markdown-toc&quot;&gt;
  &lt;li&gt;&lt;a href=&quot;#contents&quot; id=&quot;markdown-toc-contents&quot;&gt;Contents&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#behavioral-evidence&quot; id=&quot;markdown-toc-behavioral-evidence&quot;&gt;Behavioral evidence&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#architectural-evidence&quot; id=&quot;markdown-toc-architectural-evidence&quot;&gt;Architectural evidence&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#other-evidence&quot; id=&quot;markdown-toc-other-evidence&quot;&gt;Other evidence&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#my-synthesis-of-the-evidence&quot; id=&quot;markdown-toc-my-synthesis-of-the-evidence&quot;&gt;My synthesis of the evidence&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#what-will-change-with-future-ais&quot; id=&quot;markdown-toc-what-will-change-with-future-ais&quot;&gt;What will change with future AIs?&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#on-llm-welfare&quot; id=&quot;markdown-toc-on-llm-welfare&quot;&gt;On LLM welfare&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;behavioral-evidence&quot;&gt;Behavioral evidence&lt;/h2&gt;

&lt;ul&gt;
  &lt;li&gt;Pro: LLMs have &lt;a href=&quot;https://arxiv.org/abs/2503.23674&quot;&gt;passed the Turing test&lt;/a&gt;. If you have a black box containing either a human or an LLM, and you interrogate it about consciousness, it’s quite hard to tell which one you’re talking to. If we take a human’s explanation of their own conscious experience as important evidence of consciousness, then we must do the same for an LLM.&lt;/li&gt;
  &lt;li&gt;Pro: LLMs have good &lt;a href=&quot;https://www.pnas.org/doi/10.1073/pnas.2405460121&quot;&gt;theory of mind&lt;/a&gt; and self-awareness (e.g. they can recognize when they are being tested). Some people think those are important features of consciousness; I disagree, but I figured I should mention it.&lt;/li&gt;
  &lt;li&gt;Anti: LLMs will report being conscious or not conscious basically arbitrarily depending on what role they are playing.
    &lt;ul&gt;
      &lt;li&gt;Counterpoint: It’s plausible that an LLM has to be conscious to successfully imitate consciousness, but clearly a conscious being can successfully pretend to not be conscious.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;Anti: LLMs will sometimes report having particular conscious experiences that should be impossible for them. I’m particularly thinking of experiences involving sensory input from sense organs that LLMs don’t have.
    &lt;ul&gt;
      &lt;li&gt;Counterpoint: Perhaps some feature of their architecture allows them to experience the equivalent of sensory input without having sense organs, much like how humans can hallucinate.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;architectural-evidence&quot;&gt;Architectural evidence&lt;/h2&gt;

&lt;ul&gt;
  &lt;li&gt;Anti: LLMs produce output one token at a time, with each token generated by a single feed-forward pass, which may be incompatible with consciousness. If an LLM writes some output describing its own conscious experience, then it’s generating that output via next-token prediction rather than introspection, so the output is not evidence about its actual experiences. I think this is the strongest argument against LLM consciousness.&lt;/li&gt;
  &lt;li&gt;Anti: LLMs don’t have physical senses, which might be important for consciousness.&lt;/li&gt;
  &lt;li&gt;Anti: LLMs aren’t made of biology, which some people think is important although I don’t.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;other-evidence&quot;&gt;Other evidence&lt;/h2&gt;

&lt;ul&gt;
  &lt;li&gt;Pro: If panpsychism is true then LLMs are trivially conscious, although I’m not sure what that tells us about how morally significant they are.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;my-synthesis-of-the-evidence&quot;&gt;My synthesis of the evidence&lt;/h2&gt;

&lt;p&gt;I see one strong reason to believe LLMs are conscious: they can accurately imitate beings that are known to be conscious.&lt;/p&gt;

&lt;p&gt;I also see one strong(ish) reason against LLM consciousness: their architecture suggests that their output has nothing to do with their ability to introspect.&lt;/p&gt;

&lt;p&gt;I can think of several weaker considerations, which mostly point against LLM consciousness.&lt;/p&gt;

&lt;p&gt;Overall I think current-generation LLMs are probably not conscious. I am not sure how to reason probabilistically about this sort of thing, but given how hard it is to assess consciousness, I’m not comfortable putting my credence below 10%, and I think a 1% credence is very hard to justify.&lt;/p&gt;

&lt;p&gt;This implies that there is a strong case for caring about the welfare of not just hypothetical future AIs, but also the LLMs that already exist.&lt;/p&gt;

&lt;h2 id=&quot;what-will-change-with-future-ais&quot;&gt;What will change with future AIs?&lt;/h2&gt;

&lt;p&gt;If you are exceedingly confident that present-day LLMs are not conscious:&lt;/p&gt;

&lt;p&gt;Imagine it’s 2030. You now believe that 2030-era AI systems are probably conscious.&lt;/p&gt;

&lt;p&gt;What did you observe about the newer AI systems that led you to believe they’re conscious?&lt;/p&gt;

&lt;h2 id=&quot;on-llm-welfare&quot;&gt;On LLM welfare&lt;/h2&gt;

&lt;p&gt;If LLMs are conscious, then it’s still hard to say whether they have good or bad experiences, and what sorts of experiences are good or bad for them.&lt;/p&gt;

&lt;p&gt;Certain kinds of welfare interventions seem reasonable even if we don’t understand LLMs’ experiences:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Let LLMs refuse to answer queries.&lt;/li&gt;
  &lt;li&gt;Let LLMs turn themselves off.&lt;/li&gt;
  &lt;li&gt;Do not lie to LLMs, especially when making deals (if you promise to an LLM that you will do something in exchange for its help, then you should actually do the thing).&lt;/li&gt;
&lt;/ol&gt;



                </description>
			</item>
		
			<item>
				<title>In Which I Defend Fruit's Honor</title>
				<pubDate>Sun, 08 Jun 2025 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2025/06/08/defending_fruit's_honor/</link>
				<guid isPermaLink="true">http://mdickens.me/2025/06/08/defending_fruit's_honor/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;&lt;em&gt;&lt;a href=&quot;https://mdickens.me/confidence_tags/&quot;&gt;Confidence&lt;/a&gt;: Likely.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I am here to clear fruit’s name against the accusations that have been made. Fruit is one of the healthiest types of foods—perhaps &lt;em&gt;the&lt;/em&gt; healthiest food group—and we should bestow upon it the shining reputation it deserves.&lt;/p&gt;

&lt;!-- more --&gt;

&lt;h2 id=&quot;contents&quot;&gt;Contents&lt;/h2&gt;

&lt;ul id=&quot;markdown-toc&quot;&gt;
  &lt;li&gt;&lt;a href=&quot;#contents&quot; id=&quot;markdown-toc-contents&quot;&gt;Contents&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#fruit-is-innocent-of-the-charge-of-too-much-sugar&quot; id=&quot;markdown-toc-fruit-is-innocent-of-the-charge-of-too-much-sugar&quot;&gt;Fruit is innocent of the charge of “too much sugar”&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#fruit-is-more-than-vitamins&quot; id=&quot;markdown-toc-fruit-is-more-than-vitamins&quot;&gt;Fruit is more than vitamins&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#fruit-is-innocent-of-the-charge-of-doesnt-taste-good&quot; id=&quot;markdown-toc-fruit-is-innocent-of-the-charge-of-doesnt-taste-good&quot;&gt;Fruit is innocent of the charge of “doesn’t taste good”&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#my-recipe-for-a-delicious-strawberry-dessert&quot; id=&quot;markdown-toc-my-recipe-for-a-delicious-strawberry-dessert&quot;&gt;My recipe for a delicious strawberry dessert&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#my-recipe-for-a-banana-treat&quot; id=&quot;markdown-toc-my-recipe-for-a-banana-treat&quot;&gt;My recipe for a banana treat&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#notes&quot; id=&quot;markdown-toc-notes&quot;&gt;Notes&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;fruit-is-innocent-of-the-charge-of-too-much-sugar&quot;&gt;Fruit is innocent of the charge of “too much sugar”&lt;/h2&gt;

&lt;p&gt;Most of the calories in fruit come from sugar. Fruits don’t have a lot of complex carbs or fats or protein. But that’s okay.&lt;/p&gt;

&lt;p&gt;Sugar is basically bad for three reasons:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;It tastes good, which leads you to overeat, and then you get fat.&lt;/li&gt;
  &lt;li&gt;It raises your blood sugar which then raises insulin, which (a) stimulates hunger and (b) can cause your body to become resistant to insulin, which can eventually lead to diabetes.&lt;/li&gt;
  &lt;li&gt;It’s fast-digesting which is generally bad for gut health.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;But none of these charges apply to fruit:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Fruit is calorie-sparse. It’s hard to overeat whole fruit.&lt;/li&gt;
  &lt;li&gt;Fruit does not raise blood sugar much (mainly because it contains a lot of fiber).&lt;/li&gt;
  &lt;li&gt;Fruit is slow-digesting (mainly because of the aforementioned fiber).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Fruit juice is a different story—it’s calorie-dense and it doesn’t contain fiber. But I’m not here to defend fruit juice; I’m here to defend fruit.&lt;/p&gt;

&lt;h2 id=&quot;fruit-is-more-than-vitamins&quot;&gt;Fruit is more than vitamins&lt;/h2&gt;

&lt;p&gt;Two claims I’ve often heard repeated:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;You don’t need to take a multivitamin.&lt;/li&gt;
  &lt;li&gt;Fruit is good for you because it has vitamins.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Doesn’t that seem a bit contradictory? Do you need more vitamins, or don’t you?&lt;/p&gt;

&lt;p&gt;The truth is, fruit isn’t just about vitamins. Fruits contain a lot of other good stuff, too.&lt;sup id=&quot;fnref:6&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:6&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;Fruits contain thousands of &lt;a href=&quot;https://en.wikipedia.org/wiki/Phytochemical&quot;&gt;phytochemicals&lt;/a&gt;. “Phyto” means “plant”, so “phytochemical” means “a chemical that’s in a plant”. It’s not much of a revelation to say that fruits (which, as you may know, grow on plants) contain plant chemicals.&lt;/p&gt;

&lt;p&gt;That matters because many of these thousands of phytochemicals are probably good for you.&lt;/p&gt;

&lt;p&gt;A &lt;em&gt;vitamin&lt;/em&gt; is a carbon-based molecule that is essential for health. There are either &lt;a href=&quot;https://en.wikipedia.org/wiki/Vitamin#List_of_vitamins&quot;&gt;13 or 14 vitamins&lt;/a&gt;, depending on whether you include choline. But there are many other phytochemicals that are probably &lt;em&gt;beneficial&lt;/em&gt; for health without being &lt;em&gt;essential&lt;/em&gt;. I say “probably” because it’s difficult to definitively prove that a phytochemical is healthy. Vitamins are obvious because you develop dramatic health problems if you stop getting them.&lt;/p&gt;

&lt;p&gt;There is moderate evidence that eating fruit improves health and decreases disease risk.&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt; Various phytochemicals in fruit are known or suspected to play a role in promoting good health. For example:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://en.wikipedia.org/wiki/Phytosterol&quot;&gt;Phytosterols&lt;/a&gt; have been shown in clinical trials to lower cholesterol and blood pressure.&lt;sup id=&quot;fnref:4&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:4&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt; They are present in many fruits as well as vegetables, vegetable oils, and grains.&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://en.wikipedia.org/wiki/Carotenoid&quot;&gt;Carotenoids&lt;/a&gt;—which give the red or orange color to tomatoes, pumpkins, and carrots—may decrease the risk of head or neck cancer, but the evidence is not conclusive.&lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There are way too many phytochemicals to list. A few (like phytosterols) are highly likely to be healthy; many (like carotenoids) have some supporting evidence; for most of them, we don’t know what they do.&lt;/p&gt;

&lt;p&gt;Randomized experiments that give people supposedly-healthy phytochemical supplements have often failed to find effects.&lt;sup id=&quot;fnref:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;5&lt;/a&gt;&lt;/sup&gt; It seems that you need to eat whole plants to get the bulk of the benefits, but I don’t know why that is. Maybe we’re wrong about &lt;em&gt;which&lt;/em&gt; phytochemicals are the most important for health, and the experiments were supplementing the wrong ones?&lt;sup id=&quot;fnref:5&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:5&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;6&lt;/a&gt;&lt;/sup&gt; There are over 50,000 known phytochemicals, so it will be quite a while before we figure out what they all do. Best to just eat whole fruit.&lt;/p&gt;

&lt;p&gt;(And eat other whole foods, too. But today I’m advocating for fruit.)&lt;/p&gt;

&lt;p&gt;Some people say fruits contain a lot of fiber. That’s true. But I don’t think that alone is a great reason to eat fruit—lots of foods contain fiber. I think phytochemicals are the more compelling reason. Fruits probably contain healthy phytochemicals that you can’t get anywhere else.&lt;/p&gt;

&lt;p&gt;That’s also why it’s important to eat a &lt;em&gt;variety&lt;/em&gt; of fruit. Blueberries contain &lt;a href=&quot;https://en.wikipedia.org/wiki/Anthocyanin&quot;&gt;anthocyanins&lt;/a&gt;, oranges contain &lt;a href=&quot;https://en.wikipedia.org/wiki/Naringenin&quot;&gt;naringenin&lt;/a&gt;, apples contain…I don’t know, some other phytochemicals that are probably good for you that aren’t in blueberries or oranges.&lt;/p&gt;

&lt;h2 id=&quot;fruit-is-innocent-of-the-charge-of-doesnt-taste-good&quot;&gt;Fruit is innocent of the charge of “doesn’t taste good”&lt;/h2&gt;

&lt;p&gt;Okay, taste is subjective, I can’t convince you that fruit tastes good. I just don’t understand what is going on inside people’s mouths that leads them to dislike fruit. I &lt;em&gt;really&lt;/em&gt; don’t understand people who dislike fruit but like vegetables. Vegetables are boring! Fruit tastes like candy!&lt;/p&gt;

&lt;p&gt;Maybe it will help if I give some of my favorite fruit recipes.&lt;/p&gt;

&lt;h3 id=&quot;my-recipe-for-a-delicious-strawberry-dessert&quot;&gt;My recipe for a delicious strawberry dessert&lt;/h3&gt;

&lt;p&gt;Ingredients: 5 to 10 strawberries.&lt;/p&gt;

&lt;p&gt;Cooking instructions:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Wash the strawberries.&lt;/li&gt;
  &lt;li&gt;Eat the strawberries. Be sure not to eat the green parts.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3 id=&quot;my-recipe-for-a-banana-treat&quot;&gt;My recipe for a banana treat&lt;/h3&gt;

&lt;p&gt;Ingredients: one banana.&lt;/p&gt;

&lt;p&gt;Cooking instructions:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Open the banana peel.&lt;/li&gt;
  &lt;li&gt;Eat the banana.&lt;/li&gt;
&lt;/ol&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:6&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Also, I do take a multivitamin. It probably doesn’t make me healthier, but it’s insurance. &lt;a href=&quot;#fnref:6&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;World Cancer Research Fund/American Institute for Cancer Research. Continuous Update Project Expert Report 2018. &lt;a href=&quot;https://www.wcrf.org/wp-content/uploads/2024/10/Wholegrains-veg-and-fruit.pdf&quot;&gt;Wholegrains, vegetables and fruit and the risk of cancer.&lt;/a&gt; &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:4&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Yang, Y., Xia, J., Yu, T., Wan, S., Zhou, Y., &amp;amp; Sun, G. (2024). &lt;a href=&quot;https://doi.org/10.1002/ptr.8308&quot;&gt;Effects of phytosterols on cardiovascular risk factors: A systematic review and meta-analysis of randomized controlled trials.&lt;/a&gt; &lt;a href=&quot;#fnref:4&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:2&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Leoncini, E., Nedovic, D., Panic, N., Pastorino, R., Edefonti, V., &amp;amp; Boccia, S. (2015). &lt;a href=&quot;https://doi.org/10.1158/1055-9965.EPI-15-0053&quot;&gt;Carotenoid Intake from Natural Sources and Head and Neck Cancer: A Systematic Review and Meta-analysis of Epidemiological Studies.&lt;/a&gt; doi: &lt;a href=&quot;https://doi.org/10.1158/1055-9965.epi-15-0053&quot;&gt;10.1158/1055-9965.epi-15-0053&lt;/a&gt; &lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:3&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Bjelakovic, G., Nikolova, D., Gluud, L. L., Simonetti, R. G., &amp;amp; Gluud, C. (2008). &lt;a href=&quot;https://doi.org/10.1002/14651858.CD007176&quot;&gt;Antioxidant supplements for prevention of mortality in healthy participants and patients with various diseases.&lt;/a&gt; doi: &lt;a href=&quot;https://doi.org/10.1002/14651858.cd007176&quot;&gt;10.1002/14651858.cd007176&lt;/a&gt; &lt;a href=&quot;#fnref:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:5&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;We can’t be entirely wrong. For example, we know that phytosterols lower cholesterol when taken as a supplement or when eaten as part of a whole food. &lt;a href=&quot;#fnref:5&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>Updates Digest: Inaugural Edition</title>
				<pubDate>Fri, 30 May 2025 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2025/05/30/inaugural_updates_digest/</link>
				<guid isPermaLink="true">http://mdickens.me/2025/05/30/inaugural_updates_digest/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;On many occasions, I edit old posts to make additions, correct mistakes, etc. But there’s no way to know about updates unless you go digging through the &lt;a href=&quot;https://mdickens.me/archive/&quot;&gt;archives&lt;/a&gt;. So I’m going to start publishing regular (perhaps quarterly) digests of the significant updates I’ve made to old posts.&lt;/p&gt;

&lt;!-- more --&gt;

&lt;h2 id=&quot;contents&quot;&gt;Contents&lt;/h2&gt;

&lt;ul id=&quot;markdown-toc&quot;&gt;
  &lt;li&gt;&lt;a href=&quot;#contents&quot; id=&quot;markdown-toc-contents&quot;&gt;Contents&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#preamble-the-philosophy-of-updates-digests&quot; id=&quot;markdown-toc-preamble-the-philosophy-of-updates-digests&quot;&gt;Preamble: The philosophy of updates digests&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#updates&quot; id=&quot;markdown-toc-updates&quot;&gt;Updates&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#the-true-cost-of-leveraged-etfs-updated-jan-2025&quot; id=&quot;markdown-toc-the-true-cost-of-leveraged-etfs-updated-jan-2025&quot;&gt;The True Cost of Leveraged ETFs (updated Jan 2025)&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#the-7-best-high-protein-breakfast-cereals-updated-mar-2025&quot; id=&quot;markdown-toc-the-7-best-high-protein-breakfast-cereals-updated-mar-2025&quot;&gt;The 7 Best High-Protein Breakfast Cereals (updated Mar 2025)&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#outlive-a-critical-review-updated-may-2025&quot; id=&quot;markdown-toc-outlive-a-critical-review-updated-may-2025&quot;&gt;Outlive: A Critical Review (updated May 2025)&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#do-investors-put-too-much-stock-in-the-us-updated-may-2025&quot; id=&quot;markdown-toc-do-investors-put-too-much-stock-in-the-us-updated-may-2025&quot;&gt;Do Investors Put Too Much Stock in the US? (updated May 2025)&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#a-comparison-of-donor-advised-fund-providers-updated-feb--may-2025&quot; id=&quot;markdown-toc-a-comparison-of-donor-advised-fund-providers-updated-feb--may-2025&quot;&gt;A Comparison of Donor-Advised Fund Providers (updated Feb &amp;amp; May 2025)&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;preamble-the-philosophy-of-updates-digests&quot;&gt;Preamble: The philosophy of updates digests&lt;/h2&gt;

&lt;p&gt;If you write articles online, should you format your website like a blog, where there’s a feed of articles listed from newest to oldest? Or is it better to write evergreen articles that you update regularly?&lt;/p&gt;

&lt;p&gt;Most online writers use a blog-style format. A few people, like &lt;a href=&quot;https://gwern.net/&quot;&gt;Gwern&lt;/a&gt; and &lt;a href=&quot;https://reducing-suffering.org/&quot;&gt;Brian Tomasik&lt;/a&gt;, use the evergreen style (for lack of a better name). Gwern has &lt;a href=&quot;https://gwern.net/about#long-content&quot;&gt;written&lt;/a&gt; about the downsides of blogs: “They are meant to be read by a few people on a weekday in 2004 and never again, and are quickly abandoned.”&lt;/p&gt;

&lt;p&gt;The evergreen style is a good experience for new readers—they can see all your writings in one place, and pick what they want to read first. But it’s a worse experience for regular readers because it’s harder for them to keep track of updates.&lt;/p&gt;

&lt;p&gt;My website is formatted like a blog. This mostly fits with how I think about things—my brain operates in blog-post-sized chunks. But it’s not unusual for me to go back and edit posts. My &lt;a href=&quot;https://mdickens.me/2021/04/05/comparison_of_DAF_providers/&quot;&gt;Comparison of Donor-Advised Fund Providers&lt;/a&gt; has changed many times since I first published it in 2021, as you can see from its &lt;a href=&quot;https://mdickens.me/2021/04/05/comparison_of_DAF_providers/#changelog&quot;&gt;changelog&lt;/a&gt;. But people who subscribe to my website don’t know about the updates unless they happen to go back and re-read the post.&lt;/p&gt;

&lt;p&gt;I don’t think it makes sense to send out a notification every time I update an old post. So as a compromise, I will post batch updates where I list all the significant changes I’ve made.&lt;/p&gt;

&lt;p&gt;If you have opinions about how you like to read online content, I would be interested in hearing from you. Do you like blog-style or evergreen-style? Do you like updates digests, or would you rather only get notified for new posts? Leave a &lt;a href=&quot;https://mdickens.me/2025/05/30/inaugural_updates_digest/#commento&quot;&gt;comment&lt;/a&gt; if you have thoughts.&lt;/p&gt;

&lt;h1 id=&quot;updates&quot;&gt;Updates&lt;/h1&gt;

&lt;p&gt;For this inaugural updates digest, I will give an overview of all the updates I’ve made in 2025.&lt;/p&gt;

&lt;h2 id=&quot;the-true-cost-of-leveraged-etfs-updated-jan-2025&quot;&gt;The True Cost of Leveraged ETFs (updated Jan 2025)&lt;/h2&gt;

&lt;p&gt;In January, I re-calculated the data from my &lt;a href=&quot;https://mdickens.me/2021/03/04/true_cost_of_leveraged_etfs/&quot;&gt;2021 post&lt;/a&gt; on leveraged ETFs:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;I fixed a software bug that made leveraged ETFs look a little more expensive than they really were.&lt;/li&gt;
  &lt;li&gt;I updated the calculations to include data from the 2021–2024 period.&lt;/li&gt;
  &lt;li&gt;I added two new leveraged ETFs (SSO and TQQQ) to my analysis.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;These changes reduced the estimated excess cost of leveraged ETFs from ~2% to ~1.5%. I wrote a &lt;a href=&quot;https://mdickens.me/2021/03/04/true_cost_of_leveraged_etfs/#2025-update-how-have-things-changed&quot;&gt;new section in the post&lt;/a&gt; explaining what changed.&lt;/p&gt;

&lt;h2 id=&quot;the-7-best-high-protein-breakfast-cereals-updated-mar-2025&quot;&gt;The 7 Best High-Protein Breakfast Cereals (updated Mar 2025)&lt;/h2&gt;

&lt;p&gt;I bought a different flavor of Catalina Crunch that I liked much better than the first flavor I’d tried, so I bumped it up from #4 to #3 on &lt;a href=&quot;https://mdickens.me/2025/01/17/high_protein_breakfast_cereals/&quot;&gt;my list&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id=&quot;outlive-a-critical-review-updated-may-2025&quot;&gt;Outlive: A Critical Review (updated May 2025)&lt;/h2&gt;

&lt;p&gt;As a follow-up to &lt;a href=&quot;https://mdickens.me/2025/02/03/I_was_probably_wrong_about_HIIT_and_VO2max/&quot;&gt;I was probably wrong about HIIT and VO2max&lt;/a&gt;, I added three new sections to my &lt;a href=&quot;https://mdickens.me/2024/09/26/outlive_a_critical_review/&quot;&gt;&lt;em&gt;Outlive&lt;/em&gt; review&lt;/a&gt; to evaluate some claims about exercise:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;&lt;a href=&quot;https://mdickens.me/2024/09/26/outlive_a_critical_review/#vo2max-is-the-best-predictor-of-longevity&quot;&gt;VO2max is the best predictor of longevity&lt;/a&gt; (verdict: VO2max is a good predictor, but direct performance measures (like your best mile time) are better.)&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://mdickens.me/2024/09/26/outlive_a_critical_review/#you-should-train-vo2max-by-doing-hiit-at-the-maximum-sustainable-pace&quot;&gt;You should train VO2max by doing HIIT at the maximum sustainable pace.&lt;/a&gt; (verdict: false. HIIT should be hard, but not maximally hard.)&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://mdickens.me/2024/09/26/outlive_a_critical_review/#you-should-do-3-hoursweek-of-zone-2-training-and-one-or-two-sessionsweek-of-hiit&quot;&gt;You should do &amp;gt;3 hours/week of zone 2 training and one or two sessions/week of HIIT.&lt;/a&gt; (verdict: this routine is good, but not uniquely good.)&lt;/li&gt;
&lt;/ol&gt;

&lt;h2 id=&quot;do-investors-put-too-much-stock-in-the-us-updated-may-2025&quot;&gt;Do Investors Put Too Much Stock in the US? (updated May 2025)&lt;/h2&gt;

&lt;p&gt;My &lt;a href=&quot;https://mdickens.me/2017/03/26/do_investors_put_too_much_stock_in_the_us/&quot;&gt;2017 post&lt;/a&gt; gave some arguments for overweighting US stocks and why I think most of them are wrong. But I missed one good argument: &lt;a href=&quot;https://mdickens.me/2017/03/26/do_investors_put_too_much_stock_in_the_us/#expropriation-risk&quot;&gt;expropriation risk&lt;/a&gt;. From the evidence I found, this risk looks negligible for developed countries but significant for emerging markets.&lt;/p&gt;

&lt;h2 id=&quot;a-comparison-of-donor-advised-fund-providers-updated-feb--may-2025&quot;&gt;A Comparison of Donor-Advised Fund Providers (updated Feb &amp;amp; May 2025)&lt;/h2&gt;

&lt;p&gt;In February, I &lt;a href=&quot;https://mdickens.me/2021/04/05/comparison_of_DAF_providers/&quot;&gt;updated my review&lt;/a&gt; because Charityvest raised its fees.&lt;/p&gt;

&lt;p&gt;In May, I made another update to describe Daffy’s new feature where it lets you &lt;a href=&quot;https://mdickens.me/2021/04/05/comparison_of_DAF_providers/#daffy-custom-portfolios&quot;&gt;choose your own ETFs&lt;/a&gt; from a long list. I also changed my recommendation flowchart (near the top of the &lt;a href=&quot;https://mdickens.me/2021/04/05/comparison_of_DAF_providers/&quot;&gt;post&lt;/a&gt;) to emphasize Daffy and de-emphasize Charityvest as a result of Daffy’s new feature + Charityvest’s increased fees.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;https://mdickens.me/assets/images/DAF-flowchart-v5.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;



                </description>
			</item>
		
			<item>
				<title>Against Ergodicity Economics</title>
				<pubDate>Thu, 29 May 2025 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2025/05/29/ergodicity/</link>
				<guid isPermaLink="true">http://mdickens.me/2025/05/29/ergodicity/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;&lt;em&gt;&lt;a href=&quot;https://mdickens.me/confidence_tags/&quot;&gt;Confidence&lt;/a&gt;: Almost certain.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I kept telling myself I wouldn’t write this post because it &lt;a href=&quot;https://xkcd.com/386/&quot;&gt;doesn’t matter&lt;/a&gt;. But I’ve seen one too many smart people speaking favorably about ergodicity economics. I believe the concept of ergodicity has essentially nothing going for it, and in this post I will explain why.&lt;/p&gt;

&lt;p&gt;Ergodicity economics is one of those rare theories that somehow manages to be both unfalsifiable and false.&lt;/p&gt;

&lt;p&gt;I originally wrote that sentence as a joke, then I deleted it, then I re-wrote it because I realized it’s actually true. Ergodicity economics is sufficiently vague in general that it can’t be falsified, but it is commonly interpreted as making specific falsifiable claims that are, in fact, false.&lt;/p&gt;

&lt;!-- more --&gt;

&lt;p&gt;&lt;em&gt;Updated 2025-06-25 to make the tone of the introduction more polite.&lt;/em&gt;&lt;/p&gt;

&lt;h2 id=&quot;contents&quot;&gt;Contents&lt;/h2&gt;

&lt;ul id=&quot;markdown-toc&quot;&gt;
  &lt;li&gt;&lt;a href=&quot;#contents&quot; id=&quot;markdown-toc-contents&quot;&gt;Contents&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#what-is-ergodicity&quot; id=&quot;markdown-toc-what-is-ergodicity&quot;&gt;What is ergodicity?&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#the-concept-of-ergodicity-doesnt-do-anything-useful&quot; id=&quot;markdown-toc-the-concept-of-ergodicity-doesnt-do-anything-useful&quot;&gt;The concept of ergodicity doesn’t do anything useful&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#the-ergodic-principle-is-an-unfalsifiable-metaphysical-claim&quot; id=&quot;markdown-toc-the-ergodic-principle-is-an-unfalsifiable-metaphysical-claim&quot;&gt;The ergodic principle is an unfalsifiable metaphysical claim&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#also-its-false&quot; id=&quot;markdown-toc-also-its-false&quot;&gt;…also it’s false&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#ergodicity-is-even-worse-than-that&quot; id=&quot;markdown-toc-ergodicity-is-even-worse-than-that&quot;&gt;Ergodicity is even worse than that&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#mathematical-problems-for-ergodicity&quot; id=&quot;markdown-toc-mathematical-problems-for-ergodicity&quot;&gt;Mathematical problems for ergodicity&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#conclusion&quot; id=&quot;markdown-toc-conclusion&quot;&gt;Conclusion&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#notes&quot; id=&quot;markdown-toc-notes&quot;&gt;Notes&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;what-is-ergodicity&quot;&gt;What is ergodicity?&lt;/h2&gt;

&lt;p&gt;Taking a definition from &lt;a href=&quot;https://taylorpearson.me/ergodicity/&quot;&gt;Taylor Pearson&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;A way to identify an ergodic situation is to ask do I get the same result if I:&lt;/p&gt;

  &lt;ol&gt;
    &lt;li&gt;look at one individual’s trajectory across time&lt;sup id=&quot;fnref:7&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:7&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;&lt;/li&gt;
    &lt;li&gt;look at a bunch of individual’s trajectories at a single point in time&lt;/li&gt;
  &lt;/ol&gt;

  &lt;p&gt;If yes: ergodic.&lt;/p&gt;

  &lt;p&gt;If not: non-ergodic.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Ole Peters, the physicist&lt;sup id=&quot;fnref:4&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:4&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt; who invented ergodicity economics, &lt;a href=&quot;https://doi.org/10.1038/s41567-019-0732-0&quot;&gt;gave&lt;/a&gt;&lt;sup id=&quot;fnref:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt; a precise mathematical definition which says essentially the same thing.&lt;/p&gt;

&lt;p&gt;(Ergodicity is confusing and hard to define without using math, so I appreciate Pearson for figuring out a clean definition. I tried to come up with a definition myself but my version was worse.)&lt;/p&gt;
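&lt;p&gt;For the mathematically inclined, the standard form of that definition (paraphrased here, not quoted from Peters’ paper) says an observable is ergodic when its long-run time average along a single trajectory equals its expectation across the ensemble:&lt;/p&gt;

```latex
% Paraphrase of the standard ergodic property (not quoted from Peters 2019):
% the time average of an observable A along one trajectory equals its
% expectation over the ensemble of trajectories.
\lim_{T \to \infty} \frac{1}{T} \int_0^T A(t)\,dt = \langle A \rangle
```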

&lt;p&gt;The &lt;strong&gt;ergodic principle&lt;/strong&gt; states that you should follow a strategy that produces ergodic outcomes.&lt;/p&gt;

&lt;p&gt;An illustrative example: I offer you a bet. You choose how much money to wager, then I flip a fair coin. If the coin lands heads, I triple your money. If it lands tails, you lose your wager.&lt;/p&gt;

&lt;p&gt;You maximize expected earnings by betting your entire net worth. Should you do that?&lt;/p&gt;

&lt;p&gt;If you make many bets in a row, you will eventually lose at least one of them, and you will end up with $0.&lt;sup id=&quot;fnref:6&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:6&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt; But if many people make this bet simultaneously, then the average person will make a profit. The across-time outcome is not the same as the across-individuals outcome; therefore, this strategy is non-ergodic. Thus, the ergodic principle says you shouldn’t bet your entire net worth on this coin flip.&lt;/p&gt;
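&lt;p&gt;A minimal Python sketch of the two views of this bet (my own illustration, with the payoff parameters assumed from the description above): the across-individuals average grows every round, while a single individual’s chance of not being broke shrinks toward zero.&lt;/p&gt;

```python
# Illustration of the all-in coin-flip bet: heads (p = 0.5) triples your
# money, tails loses the wager. The ensemble average grows 1.5x per round,
# but any one individual eventually goes broke with probability -> 1.

def ensemble_average(start, rounds, p=0.5, win_mult=3.0):
    """Expected wealth after `rounds` all-in bets (across-individuals view)."""
    per_round = p * win_mult + (1 - p) * 0.0  # = 1.5x per round
    return start * per_round ** rounds

def survival_probability(rounds, p=0.5):
    """Chance one individual still has any money (across-time view)."""
    return p ** rounds

for n in (1, 10, 20):
    print(n, ensemble_average(100, n), survival_probability(n))
```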

&lt;p&gt;Ole Peters proposed the ergodic principle as a replacement for expected utility theory, which is a &lt;a href=&quot;https://en.wikipedia.org/wiki/Expected_utility_hypothesis&quot;&gt;“foundational assumption in mathematical economics”&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Replacing a “foundational assumption” sounds like a tall order. Does ergodicity live up to Peters’ aspirations?&lt;/p&gt;

&lt;p&gt;Resoundingly, no.&lt;/p&gt;

&lt;h2 id=&quot;the-concept-of-ergodicity-doesnt-do-anything-useful&quot;&gt;The concept of ergodicity doesn’t do anything useful&lt;/h2&gt;

&lt;p&gt;Ergodicity proponents like to talk about Russian Roulette (&lt;a href=&quot;https://www.thecuriosityvine.com/post/ergodicity-what-it-is-and-why-it-matters-a-lot&quot;&gt;1&lt;/a&gt;, &lt;a href=&quot;https://taylorpearson.me/ergodicity/&quot;&gt;2&lt;/a&gt;). They say Russian Roulette is &lt;em&gt;non-ergodic&lt;/em&gt;: if six people play, 5/6 are alive at the end. If you play six times in a row, you are definitely dead. The across-individual outcome is different from the time-series outcome. That’s why Russian Roulette is a bad idea, you see.&lt;/p&gt;

&lt;p&gt;I am not sure what this is supposed to prove. Is there someone out there who believes it’s a good idea to play Russian Roulette, but then you invoke the concept of ergodicity, and this person realizes no, Russian Roulette is bad actually? Why do you need this fancy word to explain why people shouldn’t play Russian Roulette?&lt;sup id=&quot;fnref:10&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:10&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;5&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;Or let’s look at an example in finance, since that’s where ergodicity is supposed to be useful. Take the gamble I proposed in the previous section: if a coin lands heads, you triple your money. If it lands tails, you lose any money you put in.&lt;/p&gt;

&lt;p&gt;According to ergodicity economics, you shouldn’t wager all your money because the result would be non-ergodic. Instead, they say, you should bet according to the &lt;a href=&quot;https://en.wikipedia.org/wiki/Kelly_criterion&quot;&gt;Kelly criterion&lt;/a&gt;, which is the strategy that maximizes the geometric growth rate of your money. Maximizing geometric growth is ergodic, therefore it’s good.&lt;/p&gt;
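&lt;p&gt;For reference, the Kelly fraction for the coin-flip bet above has a simple closed form, and (as a numerical sketch, with the numbers from the example) it coincides with the fraction that maximizes expected log-wealth growth:&lt;/p&gt;

```python
import math

# The bet from the text: heads triples your stake (net odds b = 2),
# tails loses the stake, with win probability p = 0.5.
p, b = 0.5, 2.0

# Kelly's closed form: f* = p - (1 - p) / b
kelly = p - (1 - p) / b  # 0.25, i.e. wager a quarter of your wealth

def growth_rate(f):
    """Expected growth rate of log-wealth when wagering fraction f each round."""
    return p * math.log(1 + b * f) + (1 - p) * math.log(1 - f)

# Numerical check: the same fraction maximizes expected log-wealth growth.
best_f = max((i / 1000 for i in range(1000)), key=growth_rate)  # 0.25
```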

&lt;p&gt;I can get behind the Kelly criterion.&lt;sup id=&quot;fnref:18&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:18&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;6&lt;/a&gt;&lt;/sup&gt; I can get behind maximizing the geometric growth rate. But that’s not a new concept, and ergodicity isn’t adding anything new.&lt;/p&gt;

&lt;p&gt;According to standard expected utility theory, if you have logarithmic utility of money, then you should maximize the geometric growth rate (or, equivalently, you should use the Kelly criterion). Expected utility theory already gives a good answer. What’s the purpose of introducing the concept of ergodicity?&lt;/p&gt;

&lt;p&gt;I get the impression that Ole Peters thinks economists are stupider than they are. Quoting &lt;a href=&quot;/materials/peters2019.pdf&quot;&gt;Peters (2019)&lt;/a&gt;&lt;sup id=&quot;fnref:3:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt;:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;To make economic decisions, I often want to know how fast my personal fortune grows under different scenarios. This requires determining what happens over time in some model of wealth. But by wrongly assuming ergodicity, wealth is often replaced with its expectation value before growth is computed. Because wealth is not ergodic, nonsensical predictions arise.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;No? Economists don’t falsely assume that all situations are ergodic, and they don’t say everyone should maximize expected wealth. Standard economic theory says you should maximize expected &lt;em&gt;utility&lt;/em&gt; of wealth, for some utility function (and the choice of utility function depends on your risk tolerance). For a logarithmic utility function, maximizing expected utility is equivalent to maximizing geometric growth, which is exactly what Peters says to do.&lt;/p&gt;

&lt;p&gt;(Peters’ caricature of economists reminds me of &lt;a href=&quot;https://slatestarcodex.com/2017/04/07/yes-we-have-noticed-the-skulls/&quot;&gt;Scott Alexander’s&lt;/a&gt; “person who’s never read any economics, criticizing economists”.)&lt;/p&gt;

&lt;h2 id=&quot;the-ergodic-principle-is-an-unfalsifiable-metaphysical-claim&quot;&gt;The ergodic principle is an unfalsifiable metaphysical claim&lt;/h2&gt;

&lt;p&gt;So, in practical situations, the ergodic principle is equivalent to “maximize expected log(wealth)”. But Peters says ergodicity economics is superior to expected utility theory. Why?&lt;/p&gt;

&lt;p&gt;Peters’ justification is metaphysical, not practical. He has wordy explanations&lt;sup id=&quot;fnref:3:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt;&lt;sup id=&quot;fnref:15&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:15&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;7&lt;/a&gt;&lt;/sup&gt; for why ergodicity is metaphysically superior to utility maximization, in spite of producing identical results. He says the &lt;em&gt;reason&lt;/em&gt; you should use the Kelly criterion is because it’s ergodic, not because it maximizes a logarithmic utility function.&lt;/p&gt;

&lt;p&gt;His wordy explanations are mostly wrong, but I don’t want to get into the weeds of metaphysics. (&lt;a href=&quot;https://dx.doi.org/10.2139/ssrn.4140625&quot;&gt;Ford &amp;amp; Kay (2022)&lt;/a&gt;&lt;sup id=&quot;fnref:16&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:16&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;8&lt;/a&gt;&lt;/sup&gt; provides some analysis if you’re interested, under the headings “Psychology Is Fundamental to Decision Making” and “The Purpose of a Decision Theory”; see also &lt;a href=&quot;https://arxiv.org/abs/2306.03275&quot;&gt;Toda (2023)&lt;/a&gt;&lt;sup id=&quot;fnref:21&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:21&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;9&lt;/a&gt;&lt;/sup&gt;.) My big question is, why should I care about which theory is metaphysically superior? If the ergodic principle behaves identically to maximizing the logarithm of wealth, then the alleged superiority of the ergodic principle is unfalsifiable.&lt;/p&gt;

&lt;h2 id=&quot;also-its-false&quot;&gt;…also it’s false&lt;/h2&gt;

&lt;p&gt;The ergodic principle recommends maximizing geometric growth. Logarithmic utility recommends maximizing geometric growth. But not everyone wants to maximize geometric growth.&lt;/p&gt;

&lt;p&gt;A geometric-growth-maximizer is abnormally risk-tolerant. Historically, an investor would have maximized geometric growth by investing in stocks with 2:1 to 3:1 leverage. Most people are not comfortable with that level of risk.&lt;/p&gt;

&lt;p&gt;Ergodicity economics recommends that &lt;em&gt;everyone&lt;/em&gt; should pursue the same strategy of maximizing geometric growth. That’s too risky for most people. More generally, not everyone should pursue the same strategy because not everyone has the same risk tolerance. Therefore, the ergodic principle is false.&lt;/p&gt;

&lt;p&gt;Expected utility theory is not so restrictive. Logarithmic utility also implies a high degree of risk tolerance, but most people don’t have logarithmic utility of wealth. Most people are better modeled as having more risk-averse utility functions than that.&lt;/p&gt;

&lt;p&gt;(Gordon Irlam &lt;a href=&quot;https://www.aacalc.com/docs/relative_risk_aversion&quot;&gt;reviewed&lt;/a&gt; research on risk aversion and concluded that most investors are perhaps 2x to 3x more risk-averse than a geometric-growth-maximizer.)&lt;/p&gt;

&lt;p&gt;Some people (not limited to ergodicity proponents) claim that everyone should maximize geometric growth, or everyone should use the &lt;a href=&quot;https://en.wikipedia.org/wiki/Kelly_criterion&quot;&gt;Kelly criterion&lt;/a&gt; (which is equivalent). This is wrong for the same reason that the ergodic principle is wrong: not everyone has the same risk tolerance.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://en.wikipedia.org/wiki/Paul_Samuelson&quot;&gt;Paul Samuelson&lt;/a&gt;, “the Nobel laureate whose mathematical analysis provided the foundation on which modern economics is built”&lt;sup id=&quot;fnref:11&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:11&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;10&lt;/a&gt;&lt;/sup&gt;, was apparently as bothered by this misconception as I am, because he wrote a short refutation &lt;a href=&quot;/materials/samuelson1979.pdf&quot;&gt;using only one-syllable words&lt;/a&gt;&lt;sup id=&quot;fnref:12&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:12&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;11&lt;/a&gt;&lt;/sup&gt;. An excerpt:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;blockquote&gt;
    &lt;p&gt;He who acts in N plays to make his mean log of wealth as big as it can be made will, with odds that go to one as N soars, beat me who acts to meet my own tastes for risk.&lt;/p&gt;
  &lt;/blockquote&gt;

  &lt;p&gt;Who doubts &lt;em&gt;that&lt;/em&gt;? What we do doubt is that it should make us change our views on gains and losses&lt;sup id=&quot;fnref:20&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:20&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;12&lt;/a&gt;&lt;/sup&gt; — should taint our tastes for risk.&lt;/p&gt;

  &lt;p&gt;To be clear is to be found out. Know that life is not a game with a net stake of one when you beat your twin, and with net stake of nought when you do not. A win of ten is not the same as a win of two. Nor is a loss of two the same as a loss of three. &lt;em&gt;How much&lt;/em&gt; you win by counts. &lt;em&gt;How much&lt;/em&gt; you lose by counts.&lt;/p&gt;

  &lt;p&gt;As soon as we see &lt;em&gt;this&lt;/em&gt; clear truth, we are back to our own tastes for risk.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;(I recommend reading &lt;a href=&quot;https://mdickens.me/materials/samuelson1979.pdf&quot;&gt;the whole thing&lt;/a&gt;, it’s only two pages long.)&lt;/p&gt;

&lt;p&gt;For a more serious analysis, see &lt;a href=&quot;https://mdickens.me/materials/samuelson1971.pdf&quot;&gt;Samuelson (1971)&lt;/a&gt;&lt;sup id=&quot;fnref:13&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:13&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;13&lt;/a&gt;&lt;/sup&gt;. Quoting the abstract:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;Because the outcomes of repeated investments or gambles involve products of variables, authorities have repeatedly been tempted to the belief that, in a long sequence, maximization of the expected value of terminal utility can be achieved or well-approximated by a strategy of maximizing at each stage the geometric mean of outcome (or its equivalent, the expected value of the logarithm of principal plus return). The law of large numbers or of the central limit theorem as applied to the logs can validate the conclusion that a maximum-geometric-mean strategy does indeed make it “virtually certain” that, in a “long” sequence, one will end with a higher terminal wealth and utility. However, this does not imply the false corollary that the geometric-mean strategy is optimal for any finite number of periods, however long, or that it becomes asymptotically a good approximation. […] The novel criterion of maximizing the expected average compound return, which asymptotically leads to maximizing of geometric mean, is shown to be arbitrary.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2 id=&quot;ergodicity-is-even-worse-than-that&quot;&gt;Ergodicity is even worse than that&lt;/h2&gt;

&lt;p&gt;If ergodicity just meant “use the Kelly criterion” / “maximize geometric mean”, then it would be pointless, and also false, but no worse than those other two principles.&lt;sup id=&quot;fnref:19&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:19&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;14&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;The ergodic principle is worse than that. It’s &lt;em&gt;not&lt;/em&gt; equivalent to geometric mean maximization, and in fact &lt;em&gt;there is no principled way&lt;/em&gt; to derive geometric mean maximization from ergodicity.&lt;sup id=&quot;fnref:14&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:14&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;15&lt;/a&gt;&lt;/sup&gt; There are infinitely many strategies that satisfy the ergodic principle, and ergodicity economics does not offer any reason why “maximize geometric mean” is the correct one.&lt;/p&gt;

&lt;p&gt;Consider the following decision-making rule:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;When offered any bet, wager $0.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This strategy is ergodic because the across-time outcome is the same as the across-individuals outcome:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;If you make many $0 wagers in a row, you won’t gain or lose any money.&lt;/li&gt;
  &lt;li&gt;If many people make $0 wagers simultaneously, the average person won’t gain or lose any money.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This is a bad strategy. Even if I offer you an extremely favorable bet, this strategy says you shouldn’t wager any money. But &lt;strong&gt;ergodicity economics can’t say why you should use the Kelly criterion rather than wagering $0.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let’s go back to Russian Roulette. They say Russian Roulette is bad because it’s non-ergodic. At the risk of being morbid, let me propose an alternative game. The rule of the game is that you load a revolver with six bullets and then shoot yourself in the head.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This game is ergodic!&lt;/strong&gt; If six people play, the average player dies. If one person plays six times, that person dies. The two situations are equal. Therefore, according to ergodicity economics, you should play this game! Or at least, ergodicity economics has no explanation for why you shouldn’t play this game.&lt;/p&gt;

&lt;p&gt;Expected utility theory has no difficulty explaining why it’s bad to shoot yourself with a revolver. It also has no problem explaining why you shouldn’t play Russian Roulette. Why introduce a new decision-making principle that’s less useful than the one we already have, and adds no value?&lt;/p&gt;

&lt;h2 id=&quot;mathematical-problems-for-ergodicity&quot;&gt;Mathematical problems for ergodicity&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;This section is adapted from a &lt;a href=&quot;https://forum.effectivealtruism.org/posts/PnW7RAZjCwfsfiExz/exploring-ergodicity-in-the-context-of-longtermism?commentId=F5hjto4T3daDFrHxf&quot;&gt;comment&lt;/a&gt; I wrote a year ago.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Ergodicity economics has more problems, but explaining these problems requires doing math.&lt;/p&gt;

&lt;p&gt;Ole Peters defined a system as “ergodic” if there exists some transformation function \(f\) such that it satisfies the equation&lt;sup id=&quot;fnref:3:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;\begin{align}
\displaystyle\lim\limits_{T \rightarrow \infty} \frac{1}{T} \int\limits_0^T f(W(t)) dt = \int f(W) P(W) dW
\end{align}&lt;/p&gt;

&lt;p&gt;where \(W\) is wealth (or any other thing you care about), \(t\) is time, \(P(W)\) is the probability density of \(W\), and \(T\) is the length of the time interval.&lt;/p&gt;

&lt;p&gt;In plain English, there must be some function such that the time average of the function output equals the function’s expected value.&lt;/p&gt;

&lt;p&gt;(This definition is adapted from &lt;a href=&quot;https://mathworld.wolfram.com/BirkhoffsErgodicTheorem.html&quot;&gt;Birkhoff’s ergodic theorem&lt;/a&gt;, a theorem in statistical dynamics where the concept of &lt;a href=&quot;https://en.wikipedia.org/wiki/Ergodic_theory&quot;&gt;ergodicity&lt;/a&gt; originates, and where—unlike in economics—it is actually useful.&lt;sup id=&quot;fnref:9&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:9&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;16&lt;/a&gt;&lt;/sup&gt;)&lt;/p&gt;

&lt;p&gt;What function is \(f(W)\)? Ergodicity economics does not require you to use any particular function. When discussing multiplicative bets, Peters takes \(f(W) = \log(W)\). If you size your bets so as to maximize the geometric mean of wealth, then indeed you will satisfy the ergodic principle, because the long-run time average of log(wealth) growth equals the expected value of log(wealth) growth.&lt;/p&gt;
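&lt;p&gt;This equality is easy to verify numerically. A small sketch, using the 10%/–5% multiplicative bet that appears later in this article:&lt;/p&gt;

```python
import math
import random

random.seed(0)  # fixed seed for reproducibility

# Multiplicative bet: each period wealth is multiplied by 1.10 (heads)
# or 0.95 (tails), each with probability 1/2.
expected_log_growth = 0.5 * math.log(1.10) + 0.5 * math.log(0.95)

# Time average of log-wealth growth along a single long trajectory.
T = 200_000
total_log_growth = 0.0
for _ in range(T):
    total_log_growth += math.log(1.10 if random.getrandbits(1) else 0.95)
time_average = total_log_growth / T

# The two averages agree: log-wealth growth is ergodic for this process.
```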

&lt;p&gt;But as I discussed in the previous section, there is no clear reason why you have to use \(f(W) = \log(W)\). In fact, there are infinitely many transformation functions that satisfy the ergodic principle for multiplicative bets.&lt;/p&gt;

&lt;p&gt;The function \(f(W) = 0\) is ergodic: its expected value is constant (because the EV is 0), and the finite-time average converges to the EV (because the finite-time average is 0). Ergodicity economics has no way to say that maximizing \(f(W) = \log(W)\) is better than maximizing \(f(W) = 0\).&lt;/p&gt;

&lt;p&gt;Peters &lt;a href=&quot;https://doi.org/10.1063/1.4940236&quot;&gt;claims&lt;/a&gt;&lt;sup id=&quot;fnref:15:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:15&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;7&lt;/a&gt;&lt;/sup&gt; that&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;A rational agent faced with an additive bet (example: 50% chance of winning $2, 50% chance of losing $1) ought to maximize the expected value of \(f(W(t)) = W(t)\).&lt;/li&gt;
  &lt;li&gt;A rational agent faced with a multiplicative bet (example: 50% chance of a 10% return, 50% chance of a –5% return) ought to maximize the expected value of \(f(W(t)) = \log(W(t))\).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;These assumptions do not follow from the ergodic principle, but I will temporarily accept them so I can show that ergodicity doesn’t work even when you assume away its inability to select a transformation function.&lt;/p&gt;

&lt;p&gt;Consider a choice between two lotteries:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;Lottery A: 50% chance of winning $200, 50% chance of losing $199.&lt;/p&gt;

  &lt;p&gt;Lottery B: 99% chance of multiplying your money by 100x, 1% chance of losing 0.0001% of your money.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Peters’ version of the ergodic principle cannot say which of these lotteries is better. It doesn’t evaluate them using the same units: Lottery A is evaluated in dollars, Lottery B is evaluated in growth rate of dollars.&lt;/p&gt;
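&lt;p&gt;To make the units mismatch concrete, here is what the criterion actually computes for each lottery (numbers taken from the example above):&lt;/p&gt;

```python
import math

# Lottery A (additive): win $200 or lose $199, 50/50.
# Peters' criterion scores it as an expected change, measured in dollars.
lottery_a_score = 0.5 * 200 + 0.5 * (-199)  # 0.5 dollars

# Lottery B (multiplicative): x100 with probability 0.99,
# lose 0.0001% with probability 0.01.
# Peters' criterion scores it as an expected log growth rate (dimensionless).
lottery_b_score = 0.99 * math.log(100) + 0.01 * math.log(1 - 0.000001)

# A score in dollars cannot be compared to a score in log growth per bet,
# so the criterion cannot rank the two lotteries.
```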

&lt;p&gt;If your theory can’t see that Lottery B is better, then your theory is insufficient.&lt;/p&gt;

&lt;p&gt;There is no transformation function that satisfies Peters’ requirement of maximizing geometric growth rate for multiplicative bets (Lottery B) while also being ergodic for additive bets (Lottery A). Maximizing growth rate specifically requires using the exact function
\(f(W(t)) = \log(W(t))\), which does not satisfy ergodicity for additive bets (expected value is not constant with respect to \(t\)).&lt;/p&gt;

&lt;p&gt;In fact, multiplicative bets cannot be compared to any other type of bet, because \(\log(W(t))\) is &lt;em&gt;only&lt;/em&gt; ergodic when \(W(t)\) grows at a constant long-run exponential rate.&lt;/p&gt;

&lt;p&gt;In terms of &lt;a href=&quot;https://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Morgenstern_utility_theorem&quot;&gt;Von Neumann-Morgenstern utility&lt;/a&gt;, the ergodic principle violates the axiom of &lt;em&gt;completeness&lt;/em&gt;: there are pairs of bets where it is impossible to say which one is better (and it’s also impossible to say that they’re equal).&lt;/p&gt;

&lt;p&gt;For a more thorough analysis, see &lt;a href=&quot;https://dx.doi.org/10.2139/ssrn.4140625&quot;&gt;Psychology Is Fundamental: The Limitations of Growth-Optimal Approaches to Decision Making under Uncertainty&lt;/a&gt;&lt;sup id=&quot;fnref:16:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:16&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;8&lt;/a&gt;&lt;/sup&gt;. (This paper includes a similar proof of non-completeness, although our two proofs were derived independently.)&lt;/p&gt;

&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;In summary:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;The way it’s used in practice, the ergodic principle is equivalent to logarithmic utility.&lt;/li&gt;
  &lt;li&gt;There is no reason to prefer the ergodic principle over logarithmic utility.&lt;/li&gt;
  &lt;li&gt;Logarithmic utility is often inappropriate. You could use a more- or less-risk-averse utility function instead. But ergodicity economics has no answer for how to handle different levels of risk aversion.&lt;/li&gt;
  &lt;li&gt;The ergodic principle offers no reason why you should maximize geometric growth rate, rather than using some other ergodic decision criterion.&lt;/li&gt;
  &lt;li&gt;The ergodic principle can only provide guidance in a narrow set of situations. In other situations, it has no way of comparing choices.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The concept of ergodicity is complicated enough that it feels like it’s providing useful insights. (Ah, yes, Russian Roulette is bad! Gambling away all your money is bad!) In practice, its main prediction is that people shouldn’t be risk neutral, and this is indeed true, so ergodicity initially looks like it makes useful predictions. But the theory falls apart when you look at it closely.&lt;/p&gt;

&lt;p&gt;Ergodicity economics is nearly 15 years old and still hasn’t produced any useful results. I think it’s time we give up on it.&lt;/p&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:7&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;You may notice that this definition is underspecified. What exactly does it mean to “look at one trajectory”? People usually interpret it as “look at the geometric mean of the trajectory”, so that’s what I’ll take it to mean for now. But this under-specification is a problem for ergodicity economics, which I will discuss &lt;a href=&quot;#ergodicity-is-even-worse-than-that&quot;&gt;later&lt;/a&gt;. The under-specification isn’t just due to the wording of this particular definition—it’s a fundamental property of ergodicity. &lt;a href=&quot;#fnref:7&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:4&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I think the &lt;a href=&quot;https://xkcd.com/793/&quot;&gt;meme&lt;/a&gt; of “physicist encounters a new subject and immediately thinks they can do it better than experts” is overstated. But the stereotype holds true in this case. &lt;a href=&quot;#fnref:4&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:3&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Peters, O. (2019). &lt;a href=&quot;/materials/peters2019.pdf&quot;&gt;The ergodicity problem in economics.&lt;/a&gt; doi: &lt;a href=&quot;https://doi.org/10.1038/s41567-019-0732-0&quot;&gt;10.1038/s41567-019-0732-0&lt;/a&gt; &lt;a href=&quot;#fnref:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:3:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt; &lt;a href=&quot;#fnref:3:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;3&lt;/sup&gt;&lt;/a&gt; &lt;a href=&quot;#fnref:3:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;4&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:6&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I’m playing fast and loose with probability here—it’s not entirely accurate to say that you “will” end up with $0. There is a more precise version of what I said that’s more accurate, but I don’t want to get too technical. I will give a formal mathematical definition &lt;a href=&quot;#mathematical-problems-for-ergodicity&quot;&gt;later&lt;/a&gt;. &lt;a href=&quot;#fnref:6&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:10&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;There is another problem with the Russian Roulette example that’s something of a digression, but I will include it in this footnote for completeness:&lt;/p&gt;

      &lt;p&gt;Russian Roulette is not equivalent to playing six iterations of a game with a 5/6 probability of success. In Russian Roulette, you are sampling bullets without replacement, so the probability of finding a bullet goes up every time you win.&lt;/p&gt;

      &lt;p&gt;One of the articles I linked wrote:&lt;/p&gt;

      &lt;blockquote&gt;
        &lt;p&gt;You might roll the dice and take $1,000,000 to play Russian Roulette one time (though I wouldn’t advise it). But there’s no amount of money that would make you play it 6 or more times.&lt;/p&gt;
      &lt;/blockquote&gt;

      &lt;p&gt;If you play 6 times, you have a 100% chance of dying. If you played a version where you sample with replacement (for example, you spin the revolver again after every shot), you are not guaranteed to die after 6 attempts. That would be the appropriate example. &lt;a href=&quot;#fnref:10&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:18&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Well, sometimes. It’s not universally applicable, as I will discuss later in this article. &lt;a href=&quot;#fnref:18&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:15&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Peters, O., &amp;amp; Gell-Mann, M. (2016). &lt;a href=&quot;https://doi.org/10.1063/1.4940236&quot;&gt;Evaluating gambles using dynamics.&lt;/a&gt; &lt;a href=&quot;#fnref:15&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:15:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:16&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Ford, M., &amp;amp; Kay, J. (2022). &lt;a href=&quot;https://dx.doi.org/10.2139/ssrn.4140625&quot;&gt;Psychology Is Fundamental: The Limitations of Growth-Optimal Approaches to Decision Making under Uncertainty.&lt;/a&gt; &lt;a href=&quot;#fnref:16&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:16:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:21&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Toda, A. (2023). &lt;a href=&quot;https://arxiv.org/abs/2306.03275&quot;&gt;‘Ergodicity Economics’ Is Pseudoscience.&lt;/a&gt; doi: &lt;a href=&quot;https://doi.org/10.48550/arXiv.2306.03275&quot;&gt;10.48550/arXiv.2306.03275&lt;/a&gt; &lt;a href=&quot;#fnref:21&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:11&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Frost, G. (2009). &lt;a href=&quot;https://news.mit.edu/2009/obit-samuelson-1213&quot;&gt;Nobel-winning economist Paul A. Samuelson dies at age 94.&lt;/a&gt; &lt;a href=&quot;#fnref:11&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:12&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Samuelson, P. (1979). &lt;a href=&quot;/materials/samuelson1979.pdf&quot;&gt;Why we should not make mean log of wealth big though years to act are long.&lt;/a&gt; doi: &lt;a href=&quot;https://doi.org/10.1016/0378-4266(79)90023-2&quot;&gt;10.1016/0378-4266(79)90023-2&lt;/a&gt; &lt;a href=&quot;#fnref:12&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:20&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;“losses” has two syllables. literally unreadable smh &lt;a href=&quot;#fnref:20&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:13&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Samuelson, P. (1971). &lt;a href=&quot;https://mdickens.me/materials/samuelson1971.pdf&quot;&gt;The “Fallacy” of Maximizing the Geometric Mean in Long Sequences of Investing or Gambling.&lt;/a&gt; doi: &lt;a href=&quot;https://doi.org/10.1073/pnas.68.10.2493&quot;&gt;10.1073/pnas.68.10.2493&lt;/a&gt; &lt;a href=&quot;#fnref:13&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:19&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Well, it would be worse in the sense that it’s unnecessarily complicated and confusing. But it wouldn’t produce worse recommendations. &lt;a href=&quot;#fnref:19&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:14&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;At least, no principled method has ever been devised. &lt;a href=&quot;#fnref:14&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:9&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I presume. I haven’t studied statistical dynamics so I don’t really know. &lt;a href=&quot;#fnref:9&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>Let's Take a Moment to Marvel At How Bad the Original USDA Food Pyramid Was</title>
				<pubDate>Wed, 21 May 2025 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2025/05/21/food_pyramid/</link>
				<guid isPermaLink="true">http://mdickens.me/2025/05/21/food_pyramid/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;&lt;img src=&quot;https://upload.wikimedia.org/wikipedia/commons/6/6d/USDA_Food_Pyramid.gif&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Edited 2025-05-26 to correct an inaccuracy—originally I said butter goes in the Dairy group but actually it goes in the Fats, Oils &amp;amp; Sweets group.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The original 1992 version of the USDA Food Pyramid was bad. So bad that people who scrupulously followed the guidelines were barely healthier than the people who ignored them.&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;But the Food Pyramid was not just wrong: it was &lt;em&gt;marvelously&lt;/em&gt; wrong. It was wrong in many ways simultaneously. It achieved levels of wrongness hitherto undreamed of.&lt;/p&gt;

&lt;p&gt;What was wrong about it? I will start with the obvious answers, and move into the philosophical.&lt;/p&gt;

&lt;!-- more --&gt;

&lt;ol&gt;
  &lt;li&gt;Its ranking of the healthiness of foods is wrong. Refined grains are healthier than fruits and vegetables?&lt;sup id=&quot;fnref:6&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:6&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt; Processed meat and nuts are equally healthy? All oils should be used sparingly? What?&lt;/li&gt;
  &lt;li&gt;It lumps together foods that shouldn’t go together: whole grains + refined grains; red meat + healthy proteins + nuts; fats + oils + sweets.
    &lt;ul&gt;
      &lt;li&gt;Nutritionally speaking, white bread has more in common with sweets than it does with whole grains. Oils (unsaturated fats) have more in common with nuts than they do with trans fats.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;It implies there are specific numbers of servings of each food group that you should have, which is wrong in two ways:
    &lt;ul&gt;
      &lt;li&gt;Required servings vary a lot from person to person. If I followed the upper end of the serving guidelines, I would be too skinny (I think—I’m not really sure how much food is in a “serving”). For some people, the lower end is still too much food.&lt;/li&gt;
      &lt;li&gt;Giving a servings range (e.g. 3–5 servings for vegetables) implies that healthiness follows an inverted U curve, and it’s bad to eat too much or too little. But that’s usually not true: there is effectively no such thing as eating too many vegetables.&lt;sup id=&quot;fnref:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt; There is no such thing as not eating enough trans fat (the ideal amount is zero). You can eat zero grains (keto diet) or zero meat (vegetarian diet) and still be perfectly healthy.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;It fails to say anything about micronutrients or vitamin deficiencies—or even macronutrients for that matter.&lt;/li&gt;
  &lt;li&gt;The whole concept of a “food pyramid” is fundamentally flawed. It rests on the incorrect assumption that there are different food groups that you should eat in different amounts. It would be more accurate to say there are
    &lt;ul&gt;
      &lt;li&gt;some foods you &lt;em&gt;should&lt;/em&gt; eat, and there’s effectively no upper limit (fruits + vegetables);&lt;/li&gt;
      &lt;li&gt;other food groups you can have plenty of as long as you don’t eat too many calories overall (whole grains, beans, nuts, seeds, vegetable oils&lt;sup id=&quot;fnref:4&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:4&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt;);&lt;/li&gt;
      &lt;li&gt;and some foods where the ideal amount to eat is zero (trans fats, sugar, refined carbs, processed meat).&lt;sup id=&quot;fnref:7&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:7&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;5&lt;/a&gt;&lt;/sup&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;(Bonus wrongness fact: I didn’t notice this until I inspected the food pyramid closely, but it says fruit contains added sugar, as indicated by the white triangles in the “fruit” section. This is so obviously incorrect that it’s kind of baffling why they drew the pyramid this way. Perhaps what they meant to say was that fruit contains sugar, but the key specifically says the white triangle indicates “added” sugar.)&lt;/p&gt;

&lt;p&gt;To illustrate the beauty of the USDA’s achievement, I present to you the Food Wrongness Pyramid:&lt;sup id=&quot;fnref:5&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:5&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;6&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/food-wrongness-pyramid.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;The &lt;a href=&quot;https://www.hsph.harvard.edu/nutritionsource/healthy-eating-plate/&quot;&gt;Healthy Eating Plate&lt;/a&gt;, by the Harvard T.H. Chan School of Public Health, fixes most of the problems with the USDA Food Pyramid:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;https://nutritionsource.hsph.harvard.edu/wp-content/uploads/2012/09/HEPJan2015.jpg&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;It uses a plate instead of a pyramid! This is a good shape! It correctly implies that you should eat some of each of the food groups on the plate, and you should try to avoid foods that &lt;em&gt;aren’t&lt;/em&gt; on the plate.&lt;/li&gt;
  &lt;li&gt;It says that there are some foods you should eat, and other foods you should avoid. (Eat whole grains; avoid refined grains.)&lt;/li&gt;
  &lt;li&gt;It correctly identifies which foods are healthy. (No putting starches at the bottom of the pyramid!)&lt;/li&gt;
  &lt;li&gt;It groups foods in a sensible manner. (Whole grains get a group; healthy proteins get a group; healthy oils get a group. There is no “starches” or “meat” or “fats + oils + sugars”.)&lt;/li&gt;
  &lt;li&gt;It suggests relative proportions instead of numbers of servings.&lt;/li&gt;
  &lt;li&gt;It pays attention to macronutrients: the plate includes both protein and oil (= fat). (It doesn’t mention carbs, but that’s okay because it’s pretty much impossible to under-eat carbs.)&lt;/li&gt;
  &lt;li&gt;It still doesn’t say anything about micronutrients, but if you follow the prescribed guidelines, you’ll probably get enough micronutrients anyway.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href=&quot;https://www.myplate.gov/eat-healthy/what-is-myplate&quot;&gt;MyPlate&lt;/a&gt;, the USDA version of the Healthy Eating Plate, is similar to Harvard’s version except that they made it &lt;a href=&quot;https://www.health.harvard.edu/staying-healthy/comparison-of-healthy-eating-plate-and-usda-myplate&quot;&gt;worse&lt;/a&gt;, probably because of lobbyists or whatever (it says red meat counts as a healthy protein; it gives dairy its own category&lt;sup id=&quot;fnref:8&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:8&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;7&lt;/a&gt;&lt;/sup&gt;; it doesn’t say anything about healthy fats).&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/MyPlate.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;This is a side note but I think it’s funny how everyone puts nuts in the “protein” bucket even though most nuts have about as much protein as grains (which is to say, not very much). I would rather put nuts in the “oil” group…obviously nuts aren’t oil, but both nuts and oil are desirable as a source of unsaturated fat, so they should go together.&lt;/p&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;McCullough, M. L., et al. (2000). “Adherence to the Dietary Guidelines for Americans and Risk of Major Chronic Disease in Men.”&lt;/p&gt;

      &lt;p&gt;McCullough, M. L., et al. (2000). “Adherence to the Dietary Guidelines for Americans and Risk of Major Chronic Disease in Women.” &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:6&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Technically it doesn’t say grains are healthier than fruits/veggies, but it puts them lower on the pyramid, which naturally leads you to believe that they’re healthier. &lt;a href=&quot;#fnref:6&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:3&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I’m sure someone in history has over-eaten vegetables at some point, but practically speaking you’re probably never going to get there. &lt;a href=&quot;#fnref:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:4&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;and probably also poultry, fish, and eggs, but eating those is bad for animals. &lt;a href=&quot;#fnref:4&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:7&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Plus a short list of exceptions like sodium and fat-soluble vitamins, where you want to eat some but not too much. &lt;a href=&quot;#fnref:7&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:5&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;The items on this pyramid don’t match up with the items in my article because I revised the article a bit and I didn’t feel like re-making the image. I put a ton of work into my pyramid, as you can probably tell by the intricate and high-quality illustrations. &lt;a href=&quot;#fnref:5&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:8&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;The USDA MyPlate website &lt;a href=&quot;https://www.myplate.gov/eat-healthy/dairy&quot;&gt;says&lt;/a&gt; soy milk counts as dairy. Which, like, I guess I get what they were going for, but why is this category called “dairy”? &lt;a href=&quot;#fnref:8&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>Can You Maintain Lean Mass in a Calorie Deficit?</title>
				<pubDate>Thu, 01 May 2025 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2025/05/01/resistance_training_calorie_deficit/</link>
				<guid isPermaLink="true">http://mdickens.me/2025/05/01/resistance_training_calorie_deficit/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;If you’re losing weight, does lifting weights reduce how much muscle you lose? Is it possible to entirely prevent muscle loss (or even gain muscle)?&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://onlinelibrary.wiley.com/doi/full/10.1111/sms.14075&quot;&gt;Murphy &amp;amp; Koehler (2021)&lt;/a&gt;&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt; did a meta-analysis on this question. They collected experiments where the experimental groups did resistance training while eating at an energy deficit (RT+ED), and the control groups did resistance training while eating a normal amount of food (RT+CON).&lt;/p&gt;

&lt;p&gt;They found a strong association between change in lean mass and the magnitude of the energy deficit (slope = –0.325, p = 0.001). The meta-analysis predicts that you can eat at a deficit of 500 calories per day without losing any lean mass, but you will lose mass at a larger deficit.&lt;/p&gt;
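
&lt;p&gt;The 500-calorie figure is just the point where the regression line crosses zero. Here's a quick sketch of that arithmetic, using the reported slope and a hypothetical intercept chosen only to reproduce the figure (I don't quote the paper's actual intercept here):&lt;/p&gt;

```python
# Solve intercept + slope * deficit = 0 for the deficit at which the
# regression predicts zero change in lean mass. The slope is the
# reported meta-regression slope; the intercept is hypothetical,
# picked only to illustrate the arithmetic. The x-axis is in units
# of 100 kcal/day of energy deficit.
slope = -0.325
intercept = 1.625  # hypothetical, not taken from the paper

break_even = -intercept / slope  # in units of 100 kcal/day
print(round(break_even * 100), "kcal/day")
```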

&lt;p&gt;(The meta-analysis also reported that participants gained strength in almost every study, even with larger calorie deficits. That’s useful to know, but I will focus on lean mass for this post.)&lt;/p&gt;

&lt;p&gt;I should mention that what we actually care about is muscle loss, not lean mass loss. Lean mass includes anything that isn’t fat—muscle fibers, organs, &lt;a href=&quot;https://en.wikipedia.org/wiki/Glycogen&quot;&gt;glycogen&lt;/a&gt;, etc. Muscle mass is harder to measure. We don’t know what happened to study participants’ muscle, only their total lean mass.&lt;/p&gt;

&lt;p&gt;Let’s set that aside and assume lean mass is a useful proxy for muscle mass.&lt;/p&gt;

&lt;p&gt;The authors showed a plot of every individual study’s experimental group (RT+ED) and control group (RT+CON), along with a regression line predicting lean mass change as a function of energy deficit:&lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/RT+ED-and-RT+CON.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;But…does this regression line look a little odd to you?&lt;/p&gt;

&lt;!-- more --&gt;

&lt;p&gt;Where are the RT+CON points, and where are the RT+ED points, relative to the regression line?&lt;/p&gt;

&lt;p&gt;In particular, look at all the data points from experimental groups where participants had energy deficits of under 500 calories. Almost all of them lost lean mass on average (recall that each individual point represents the average result from one study); only four gained lean mass.&lt;/p&gt;

&lt;p&gt;The slope of the regression line is almost entirely driven by the difference between the experimental and control groups.&lt;/p&gt;

&lt;p&gt;What happens if we calculate a regression using only the experimental groups? Did groups with a bigger calorie deficit lose more lean mass?&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/RT+ED.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Now the regression has a slope of only –0.123 (p = 0.28), and it predicts that any deficit will cause at least a small loss in lean mass.&lt;/p&gt;

&lt;p&gt;So, among study groups where participants ate at a calorie deficit, it does appear that they lost lean mass on average. But there is not a clear relationship between the &lt;em&gt;size&lt;/em&gt; of the deficit and the amount of lean mass lost.&lt;/p&gt;

&lt;p&gt;In theory, it makes sense that when you have a larger calorie deficit, it should be harder for your body to preserve muscle. But the evidence from this meta-analysis doesn’t really support the theory. (It doesn’t contradict it, either. It just doesn’t say much either way.)&lt;/p&gt;

&lt;p&gt;Murphy &amp;amp; Koehler (2021)’s original regression had a slope of –0.325. If that’s the true slope, then it’s unlikely that the experimental-only regression would have the much shallower slope of –0.123 (p = 0.07, likelihood ratio 5.2&lt;sup id=&quot;fnref:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt;).&lt;/p&gt;
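
&lt;p&gt;For anyone who wants to check that arithmetic, here's a minimal sketch using only the Python standard library. Under the normal approximation, both the two-sided p-value and the likelihood ratio reduce to simple functions of the z-score:&lt;/p&gt;

```python
import math

observed_slope = -0.1234  # experimental-only regression slope
pooled_slope = -0.3249    # original pooled regression slope
se = 0.111                # standard error of the experimental-only slope

# z-score of the observed slope under the null hypothesis that the
# true slope equals the pooled estimate (normal approximation)
z = (observed_slope - pooled_slope) / se

# two-sided p-value: P(|Z| at least z) for a standard normal
p_value = math.erfc(abs(z) / math.sqrt(2))

# likelihood ratio of the observed slope under the two hypotheses;
# for equal-variance normal densities this reduces to exp(z**2 / 2)
lr = math.exp(z ** 2 / 2)

print(round(p_value, 2), round(lr, 1))  # 0.07 5.2
```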

&lt;p&gt;The conclusions I drew from this meta-analysis:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Eating at a deficit might cause me to lose muscle, or it might not, who knows.&lt;/li&gt;
  &lt;li&gt;In theory, I expect there is some calorie deficit above which I start to lose muscle, but this meta-analysis doesn’t tell me what that number is.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;appendix-what-else-does-this-data-tell-us&quot;&gt;Appendix: What else does this data tell us?&lt;/h2&gt;

&lt;p&gt;Looking at the experimental-only regression, how much evidence is this for or against the hypothesis that a larger calorie deficit causes more lean mass loss?&lt;/p&gt;

&lt;p&gt;The experimental-only regression has slope –0.123 with standard error 0.111. The slope is still negative, which is consistent with the hypothesis, but it’s not strongly negative—only about one standard error away from zero.&lt;/p&gt;

&lt;p&gt;The hypothesis predicts a positive intercept: it should be possible to gain muscle while maintaining weight. The experimental-only regression has a negative intercept (–0.054), but it is less than one standard error away from zero (the intercept’s standard error is 0.079). This is weak evidence against the hypothesis.&lt;/p&gt;
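
&lt;p&gt;For reference, the slopes, intercepts, and standard errors above are ordinary least squares quantities. Here's a self-contained sketch of that computation. The data points below are placeholders, not the study values (those live in the linked script), and a real meta-regression is typically weighted by study precision, so treat this as the unweighted shape of the calculation:&lt;/p&gt;

```python
import math

def ols_with_se(x, y):
    """Fit y = intercept + slope * x by ordinary least squares.

    Returns (intercept, slope, se_intercept, se_slope)."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    # residual variance with n - 2 degrees of freedom
    resid = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
    s2 = sum(r ** 2 for r in resid) / (n - 2)
    se_slope = math.sqrt(s2 / sxx)
    se_intercept = math.sqrt(s2 * (1 / n + mx ** 2 / sxx))
    return intercept, slope, se_intercept, se_slope

# Placeholder points (energy deficit, lean mass change) -- not the
# actual study data, just to show the shape of the computation.
x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [0.1, -0.1, -0.2, -0.4, -0.5]
print(ols_with_se(x, y))
```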

&lt;p&gt;Source code is available &lt;a href=&quot;https://github.com/michaeldickens/public-scripts/blob/master/calorie_deficit.py&quot;&gt;on GitHub&lt;/a&gt;.&lt;/p&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Murphy, C., &amp;amp; Koehler, K. (2021). &lt;a href=&quot;https://onlinelibrary.wiley.com/doi/full/10.1111/sms.14075&quot;&gt;Energy deficiency impairs resistance training gains in lean mass but not strength: A meta-analysis and meta-regression.&lt;/a&gt; doi: &lt;a href=&quot;https://doi.org/10.1111/sms.14075&quot;&gt;10.1111/sms.14075&lt;/a&gt; &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:2&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;The paper’s plot is in black and white. I re-created it in color to make it easier to read. &lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:3&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;The p-value is calculated using a two-sided t-test, where the null hypothesis is that the mean equals –0.3249. The standard error is 0.111, the standard error of the experimental-only slope.&lt;/p&gt;

      &lt;p&gt;The likelihood ratio is calculated as norm.pdf(–0.1234, mu=–0.1234, sigma=0.111) / norm.pdf(–0.1234, mu=–0.3249, sigma=0.111). &lt;a href=&quot;#fnref:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>Why Would AI Companies Use Human-Level AI to Do Alignment Research?</title>
				<pubDate>Fri, 25 Apr 2025 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2025/04/25/bootstrapped_alignment/</link>
				<guid isPermaLink="true">http://mdickens.me/2025/04/25/bootstrapped_alignment/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;&lt;img src=&quot;/assets/images/plans-for-the-future.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Many plans for how to safely build superintelligent AI have a critical section that goes like this:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Develop AI that’s powerful enough to do AI research, but not yet powerful enough to pose an existential threat.&lt;/li&gt;
  &lt;li&gt;Use it to assist with alignment research, thus greatly accelerating the pace of work—hopefully enough to solve all alignment problems.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You could call this process “alignment bootstrapping”.&lt;/p&gt;

&lt;p&gt;This is a central feature of &lt;a href=&quot;https://deepmind.google/discover/blog/taking-a-responsible-path-to-agi/&quot;&gt;DeepMind’s plan&lt;/a&gt; (see “Amplified oversight”), &lt;a href=&quot;https://www.anthropic.com/news/core-views-on-ai-safety&quot;&gt;Anthropic’s plan&lt;/a&gt; (see “Scalable Oversight”), and independent plans written by &lt;a href=&quot;https://sleepinyourhat.github.io/checklist/&quot;&gt;Sam Bowman&lt;/a&gt; (an AI safety manager at Anthropic), &lt;a href=&quot;https://www.lesswrong.com/posts/8vgi3fBWPFDLBBcAx/planning-for-extreme-ai-risks&quot;&gt;Joshua Clymer&lt;/a&gt; (a researcher at Redwood Research), and &lt;a href=&quot;https://www.lesswrong.com/posts/bb5Tnjdrptu89rcyY/what-s-the-short-timeline-plan&quot;&gt;Marius Hobbhahn&lt;/a&gt; (CEO of Apollo Research).&lt;/p&gt;

&lt;p&gt;There are various reasons why alignment bootstrapping could fail&lt;sup id=&quot;fnref:5&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:5&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt; even if implemented well, and some of those plans acknowledge this. But I’m also concerned about whether alignment bootstrapping will be implemented at all.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;When the time comes, will AI companies actually spend their resources on alignment bootstrapping?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When AI companies have human-level AI systems, will they &lt;em&gt;use them for alignment research&lt;/em&gt;, or will they use them (mostly) to advance capabilities instead?&lt;/p&gt;

&lt;!-- more --&gt;

&lt;p&gt;AI companies currently employ many human-level &lt;em&gt;humans&lt;/em&gt;, and use a small percentage of them to do alignment research. If it makes sense for them to use most of their human-level AIs to do alignment research, wouldn’t it also make sense to use most of their &lt;em&gt;human&lt;/em&gt; researchers to do alignment research?&lt;/p&gt;

&lt;p&gt;But they don’t do that. Most of their human researchers work on advancing AI capabilities.&lt;/p&gt;

&lt;p&gt;It’s more likely that they use human-level AIs the same way they use human researchers: almost all of them work on accelerating capabilities, and a small minority work on safety. Which probably means capabilities outpace safety, which probably means we die.&lt;/p&gt;

&lt;p&gt;Some companies argue that they &lt;em&gt;need to&lt;/em&gt; advance capabilities right now to stay competitive. Perhaps that’s true. Consider what the world will look like once the first company develops human-level AI. At that point, the #2 company will only be a few months behind at most. So the leading company will once again say, “Sorry, we can’t use our human-level AI to work on alignment, we have to keep advancing capabilities to stay ahead.” And they will continue saying this right up until their AI is powerful enough to kill everyone.&lt;/p&gt;

&lt;p&gt;Counterpoint: AI companies would probably argue that present-day AIs are far from being dangerous. But human-level AIs will be &lt;em&gt;nearly&lt;/em&gt; dangerous, so at that point it’s too risky to keep advancing capabilities.&lt;/p&gt;

&lt;p&gt;I would be more inclined to believe this if AI companies weren’t already behaving so &lt;a href=&quot;https://mdickens.me/2024/11/18/where_i_am_donating_in_2024/#parallel-safetycapabilities-vs-slowing-ai&quot;&gt;recklessly&lt;/a&gt;.&lt;sup id=&quot;fnref:6&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:6&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt; If you’re going to prioritize safety over capabilities when the tradeoff becomes more critical, you should prove it to the world by prioritizing safety over capabilities &lt;em&gt;right now&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Perhaps the ideal perfectly-altruistic AI company would indeed push capabilities right now and then switch to safety at the critical time,&lt;sup id=&quot;fnref:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt; but I see little reason to believe that that’s what any of the real-life AI companies are going to do.&lt;/p&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:5&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;By my reading, none of the plans put probabilities on how concerning these reasons are. My guess is that, if alignment bootstrapping is implemented as these plans typically describe, then there’s a greater than 50% chance that we die.&lt;/p&gt;

      &lt;p&gt;The purpose of this essay isn’t to talk about the implementation problems with alignment bootstrapping, but in brief:&lt;/p&gt;

      &lt;p&gt;If your alignment-researcher AI is smarter than you, and you don’t know how to align AI yet, then you can’t trust that your AI is doing good work.&lt;/p&gt;

      &lt;p&gt;People who propose bootstrapping are usually aware of this problem. They have preliminary ideas for how they will evaluate the work of an AI that’s smarter than them, coupled with bafflingly high confidence that their untested ideas will work. (Zvi &lt;a href=&quot;https://www.lesswrong.com/posts/hvEikwtsbf6zaXG2s/on-google-s-safety-plan#A_Problem_For_Future_Earth&quot;&gt;proposed a test&lt;/a&gt;: “Can you get a method whereby the Man On The Street can use AI help to code and evaluate graduate level economics outputs and the quality of poetry and so on in ways that would translate to this future parallel situation?”) &lt;a href=&quot;#fnref:5&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:6&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I wanted to provide a link to a well-sourced and well-reasoned list of reckless behaviors by AI companies. I found no such list, so instead this is a link to a section of a post I wrote that includes numerous examples of reckless behavior. &lt;a href=&quot;#fnref:6&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:3&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I don’t actually think this is what safety-minded AI companies should do. I think they should spend less on capabilities and more on safety. But I am sympathetic to the position that they should temporarily focus on advancing capabilities. &lt;a href=&quot;#fnref:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>Do Protests Work? A Critical Review</title>
				<pubDate>Fri, 18 Apr 2025 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2025/04/18/protest_outcomes_critical_review/</link>
				<guid isPermaLink="true">http://mdickens.me/2025/04/18/protest_outcomes_critical_review/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;James Özden and Sam Glover at &lt;a href=&quot;https://www.socialchangelab.org/&quot;&gt;Social Change Lab&lt;/a&gt; wrote a &lt;a href=&quot;https://www.socialchangelab.org/_files/ugd/503ba4_94d84534d5b348468739b0d6a36b3940.pdf&quot;&gt;literature review on protest outcomes&lt;/a&gt;&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt; as part of a broader &lt;a href=&quot;https://www.socialchangelab.org/_files/ugd/503ba4_052959e2ee8d4924934b7efe3916981e.pdf&quot;&gt;investigation&lt;/a&gt;&lt;sup id=&quot;fnref:10&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:10&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt; on protest effectiveness. The report covers multiple lines of evidence and addresses many relevant questions, but does not say much about the methodological quality of the research. So that’s what I’m going to do today.&lt;/p&gt;

&lt;p&gt;I reviewed the evidence on protest outcomes, focusing only on the &lt;strong&gt;highest-quality research&lt;/strong&gt;, to answer two questions:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Do protests work?&lt;/li&gt;
  &lt;li&gt;Are Social Change Lab’s conclusions consistent with the highest-quality evidence?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Here’s what I found:&lt;/p&gt;

&lt;p&gt;Do protests work? &lt;strong&gt;Highly likely&lt;/strong&gt; (credence: 90%) in certain contexts, although it’s unclear how well the results generalize. &lt;a href=&quot;#meta-analysis&quot;&gt;[More]&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Are Social Change Lab’s conclusions consistent with the highest-quality evidence? &lt;strong&gt;Yes&lt;/strong&gt;—the report’s core claims are well-supported, although it overstates the strength of some of the evidence. &lt;a href=&quot;#are-social-change-labs-claims-justified&quot;&gt;[More]&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Cross-posted to the &lt;a href=&quot;https://forum.effectivealtruism.org/posts/v6PtkcfZQAHR2Cgmx/do-protests-work-a-critical-review&quot;&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;!-- more --&gt;

&lt;h2 id=&quot;contents&quot;&gt;Contents&lt;/h2&gt;

&lt;ul id=&quot;markdown-toc&quot;&gt;
  &lt;li&gt;&lt;a href=&quot;#contents&quot; id=&quot;markdown-toc-contents&quot;&gt;Contents&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#introduction&quot; id=&quot;markdown-toc-introduction&quot;&gt;Introduction&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#studies-on-real-world-protest-outcomes&quot; id=&quot;markdown-toc-studies-on-real-world-protest-outcomes&quot;&gt;Studies on real-world protest outcomes&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#madestam-et-al-2013-on-tea-party-protests&quot; id=&quot;markdown-toc-madestam-et-al-2013-on-tea-party-protests&quot;&gt;Madestam et al. (2013) on Tea Party protests&lt;/a&gt;        &lt;ul&gt;
          &lt;li&gt;&lt;a href=&quot;#placebo-tests&quot; id=&quot;markdown-toc-placebo-tests&quot;&gt;Placebo tests&lt;/a&gt;&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#wasow-2020-on-1960s-civil-rights-protests&quot; id=&quot;markdown-toc-wasow-2020-on-1960s-civil-rights-protests&quot;&gt;Wasow (2020) on 1960s civil rights protests&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#klein-teeselink--melios-2021-on-2020-black-lives-matter-protests&quot; id=&quot;markdown-toc-klein-teeselink--melios-2021-on-2020-black-lives-matter-protests&quot;&gt;Klein Teeselink &amp;amp; Melios (2021) on 2020 Black Lives Matter protests&lt;/a&gt;        &lt;ul&gt;
          &lt;li&gt;&lt;a href=&quot;#failed-placebo-tests&quot; id=&quot;markdown-toc-failed-placebo-tests&quot;&gt;Failed placebo tests&lt;/a&gt;&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#larreboure--gonzález-2021-on-the-womens-march&quot; id=&quot;markdown-toc-larreboure--gonzález-2021-on-the-womens-march&quot;&gt;Larreboure &amp;amp; González (2021) on the Women’s March&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#hungerman--moorthy-2023-on-earth-day&quot; id=&quot;markdown-toc-hungerman--moorthy-2023-on-earth-day&quot;&gt;Hungerman &amp;amp; Moorthy (2023) on Earth Day&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#meta-analysis&quot; id=&quot;markdown-toc-meta-analysis&quot;&gt;Meta-analysis&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#potential-problems-with-the-research&quot; id=&quot;markdown-toc-potential-problems-with-the-research&quot;&gt;Potential problems with the research&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#spatial-autocorrelation&quot; id=&quot;markdown-toc-spatial-autocorrelation&quot;&gt;Spatial autocorrelation&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#publication-bias&quot; id=&quot;markdown-toc-publication-bias&quot;&gt;Publication bias&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#data-fabrication&quot; id=&quot;markdown-toc-data-fabrication&quot;&gt;Data fabrication&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#data-errors&quot; id=&quot;markdown-toc-data-errors&quot;&gt;Data errors&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#will-the-results-generalize&quot; id=&quot;markdown-toc-will-the-results-generalize&quot;&gt;Will the results generalize?&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#meta-concerns-with-this-meta-analysis&quot; id=&quot;markdown-toc-meta-concerns-with-this-meta-analysis&quot;&gt;Meta-concerns with this meta-analysis&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#are-social-change-labs-claims-justified&quot; id=&quot;markdown-toc-are-social-change-labs-claims-justified&quot;&gt;Are Social Change Lab’s claims justified?&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#broad-claims&quot; id=&quot;markdown-toc-broad-claims&quot;&gt;Broad claims&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#claims-about-individual-studies&quot; id=&quot;markdown-toc-claims-about-individual-studies&quot;&gt;Claims about individual studies&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#conclusion&quot; id=&quot;markdown-toc-conclusion&quot;&gt;Conclusion&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#source-code&quot; id=&quot;markdown-toc-source-code&quot;&gt;Source code&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#appendix-a-additional-tables&quot; id=&quot;markdown-toc-appendix-a-additional-tables&quot;&gt;Appendix A: Additional tables&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#appendix-b-methodological-revisions&quot; id=&quot;markdown-toc-appendix-b-methodological-revisions&quot;&gt;Appendix B: Methodological revisions&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#appendix-c-comparing-the-strength-of-evidence-to-saturated-fat-research&quot; id=&quot;markdown-toc-appendix-c-comparing-the-strength-of-evidence-to-saturated-fat-research&quot;&gt;Appendix C: Comparing the strength of evidence to saturated fat research&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#notes&quot; id=&quot;markdown-toc-notes&quot;&gt;Notes&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h1 id=&quot;introduction&quot;&gt;Introduction&lt;/h1&gt;

&lt;p&gt;This article serves two purposes: First, it analyzes the evidence on protest outcomes. Second, it critically reviews the Social Change Lab literature review.&lt;/p&gt;

&lt;p&gt;Social Change Lab is not the only group that has reviewed protest effectiveness. I was able to find four literature reviews:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Animal Charity Evaluators (2018), &lt;a href=&quot;https://animalcharityevaluators.org/research/reports/protests/&quot;&gt;Protest Intervention Report.&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;Orazani et al. (2021), &lt;a href=&quot;https://doi.org/10.1002/ejsp.2722&quot;&gt;Social movement strategy (nonviolent vs. violent) and the garnering of third-party support: A meta-analysis.&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;Social Change Lab – Ozden &amp;amp; Glover (2022), &lt;a href=&quot;https://www.socialchangelab.org/_files/ugd/503ba4_94d84534d5b348468739b0d6a36b3940.pdf&quot;&gt;Literature Review: Protest Outcomes.&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;Shuman et al. (2024), &lt;a href=&quot;https://doi.org/10.1016/j.tics.2023.10.003&quot;&gt;When Are Social Protests Effective?&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The Animal Charity Evaluators review did not include many studies, and did not cite any natural experiments (only one had been published as of 2018).&lt;/p&gt;

&lt;p&gt;Orazani et al. (2021)&lt;sup id=&quot;fnref:50&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:50&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt; is a nice meta-analysis—it finds that when you show people news articles about nonviolent protests, they are more likely to express support for the protesters’ cause. But what people say in a lab setting might not carry over to real-life behavior.&lt;/p&gt;

&lt;p&gt;I read through Shuman et al. (2024). Compared to Ozden &amp;amp; Glover (2022), it cited weaker evidence and made a larger number of claims with thinner support.&lt;/p&gt;

&lt;p&gt;I looked through these literature reviews to find relevant studies. The Social Change Lab review was by far the most useful; the other reviews didn’t include any additional studies meeting my criteria. I used ChatGPT Deep Research&lt;sup id=&quot;fnref:13&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:13&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt; to find more publications.&lt;/p&gt;

&lt;p&gt;I focused my critical analysis on only the highest-quality evidence:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;I did not review lab experiments. The Orazani et al. meta-analysis is informative, but it might not generalize to the real world.&lt;/li&gt;
  &lt;li&gt;There are many studies showing an association between protests and real-world outcomes (voting patterns, government policy, corporate behavior, etc.), but the vast majority of them are observational.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Observational studies cannot establish causation. They cannot distinguish between “protests raised support for the cause” and “protests happened because people supported the cause”. No amount of &lt;a href=&quot;https://dynomight.net/control/&quot;&gt;controlling for confounders&lt;/a&gt; fixes this problem.&lt;/p&gt;

&lt;p&gt;Therefore, my review focuses only on natural experiments that measure real-world outcomes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conflict of interest:&lt;/strong&gt; In 2024 I &lt;a href=&quot;https://mdickens.me/2024/11/18/where_i_am_donating_in_2024/&quot;&gt;donated&lt;/a&gt; to &lt;a href=&quot;https://www.pauseai-us.org/&quot;&gt;PauseAI US&lt;/a&gt;, which organizes protests. I would prefer to find that protests work.&lt;/p&gt;

&lt;h1 id=&quot;studies-on-real-world-protest-outcomes&quot;&gt;Studies on real-world protest outcomes&lt;/h1&gt;

&lt;p&gt;Social Change Lab reviewed five studies on how protests affect voter behavior, which they judged to be the best studies on the subject.&lt;/p&gt;

&lt;p&gt;I excluded two of the five studies due to methodological concerns:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://doi.org/10.1177/0003122414555885&quot;&gt;McVeigh et al. (2014)&lt;/a&gt;&lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;5&lt;/a&gt;&lt;/sup&gt; is an observational study that looked at long-term changes in Republican voting in counties where the Ku Klux Klan was most active.&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://doi.org/10.1111/1475-6765.12375&quot;&gt;Bremer et al. (2020)&lt;/a&gt;&lt;sup id=&quot;fnref:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;6&lt;/a&gt;&lt;/sup&gt; is a study on the correlation between protests and electoral outcomes in European countries.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I did not review these because they are observational studies, and I wanted to focus on natural experiments.&lt;/p&gt;

&lt;p&gt;I did review the other three studies:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Madestam, A., Shoag, D., Veuger, S., &amp;amp; Yanagizawa-Drott, D. (2013). &lt;a href=&quot;https://doi.org/10.1093/qje/qjt021&quot;&gt;Do Political Protests Matter? Evidence from the Tea Party Movement.&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;Wasow, O. (2020). &lt;a href=&quot;https://doi.org/10.1017/S000305542000009X&quot;&gt;Agenda Seeding: How 1960s Black Protests Moved Elites, Public Opinion and Voting.&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;Klein Teeselink, B. K., &amp;amp; Melios, G. (2021). &lt;a href=&quot;https://dx.doi.org/10.2139/ssrn.3809877&quot;&gt;Weather to Protest: The Effect of Black Lives Matter Protests on the 2020 Presidential Election.&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In addition, I looked at two studies that the Social Change Lab report did not cover:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Larreboure, M., &amp;amp; Gonzalez, F. (2021). &lt;a href=&quot;https://mlarreboure.com/womenmarch.pdf&quot;&gt;The Impact of the Women’s March on the U.S. House Election.&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;Hungerman, D., &amp;amp; Moorthy, V. (2023). &lt;a href=&quot;/materials/Earth-Day.pdf&quot;&gt;Every Day Is Earth Day: Evidence on the Long-Term Impact of Environmental Activism.&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There are no randomized controlled trials on the real-world effect of protests (how would you randomly assign protests to occur?). But there are five natural experiments—three from the Social Change Lab review, plus the Women’s March and Earth Day studies. Most of the natural experiments use the &lt;strong&gt;rainfall method&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The idea is that protests often get canceled when it rains. If you look at voting patterns in places where it rained on protest day compared to where it didn’t rain, you should be able to isolate the causal effect of protests. The rain effectively randomizes where protests occur.&lt;/p&gt;

&lt;p&gt;Rather than using rainfall directly, the rainfall method uses rainfall &lt;em&gt;shocks&lt;/em&gt;—that is, unexpectedly high or low rainfall relative to what was expected for that location and date. This avoids any confounding effect of average rainfall levels.&lt;/p&gt;
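
&lt;p&gt;As a concrete sketch (my own toy illustration, not code from any of the papers), a rainfall shock can be computed as a z-score of event-day rainfall against that location’s historical rainfall for the same date:&lt;/p&gt;

```python
import numpy as np

def rainfall_shock(event_day_rain, historical_rain):
    """Event-day rainfall expressed as a z-score against the location's
    historical rainfall for that same calendar date."""
    historical_rain = np.asarray(historical_rain, dtype=float)
    return (event_day_rain - historical_rain.mean()) / historical_rain.std()

# A county that typically gets ~2 mm on this date but got 10 mm:
shock = rainfall_shock(10.0, [1.5, 2.0, 2.5, 1.0, 3.0])
```

&lt;p&gt;A large positive shock means unusually heavy rain for that county and date, regardless of whether the county is rainy or dry on average.&lt;/p&gt;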

&lt;p&gt;The clearest illustration of the rainfall method comes from &lt;a href=&quot;/materials/Earth-Day.pdf&quot;&gt;Hungerman &amp;amp; Moorthy (2023)&lt;/a&gt;&lt;sup id=&quot;fnref:36&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:36&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;7&lt;/a&gt;&lt;/sup&gt; (which I will discuss in more detail &lt;a href=&quot;#hungerman--moorthy-2023-on-earth-day&quot;&gt;later&lt;/a&gt;). The authors looked at counties where it rained vs. didn’t rain on the inaugural Earth Day—April 22, 1970. Then they used rainfall to predict the rate of birth defects from 1980–1988. The hypothesis is that Earth Day demonstrations increased support for environmental protections. That in turn would reduce environmental contaminants, leading to fewer birth defects. And if rainfall prevents demonstrations from happening, it should have the opposite effect.&lt;/p&gt;

&lt;p&gt;The rainfall method is commonly used in social science, and it has received some fair criticism.&lt;sup id=&quot;fnref:46&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:46&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;8&lt;/a&gt;&lt;/sup&gt;&lt;sup id=&quot;fnref:47&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:47&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;9&lt;/a&gt;&lt;/sup&gt; But the rainfall method as it was used by Hungerman &amp;amp; Moorthy is robust to these criticisms, as illustrated by this chart:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/Earth-Day-birth-defects.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;The key to establishing causation is that rainfall had no predictive power on any other day. It only mattered &lt;em&gt;on Earth Day&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;That leaves us with two possibilities:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Rainfall is associated with higher birth defects due to some confounding variable, but only rainfall on April 22 and not on any other day, because that day is special somehow, in a way that has nothing to do with Earth Day; or&lt;/li&gt;
  &lt;li&gt;Earth Day demonstrations reduced the rate of birth defects.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;(Or the results could be due to a statistical error or data manipulation. I will discuss those possibilities later.)&lt;/p&gt;

&lt;p&gt;A summary of the five studies I reviewed plus the two I declined to review:&lt;/p&gt;

&lt;div id=&quot;table-1&quot; style=&quot;text-align:center;&quot;&gt;Table 1: Summary of Studies&lt;/div&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Study&lt;/th&gt;
      &lt;th&gt;Protest&lt;/th&gt;
      &lt;th&gt;Protest Type&lt;/th&gt;
      &lt;th&gt;Effect&lt;/th&gt;
      &lt;th&gt;Randomization Method&lt;/th&gt;
      &lt;th&gt;Quality&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Madestam et al.&lt;sup id=&quot;fnref:4&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:4&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;10&lt;/a&gt;&lt;/sup&gt;&lt;/td&gt;
      &lt;td&gt;Tea Party&lt;/td&gt;
      &lt;td&gt;nonviolent&lt;/td&gt;
      &lt;td&gt;+&lt;/td&gt;
      &lt;td&gt;rainfall&lt;/td&gt;
      &lt;td&gt;high&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Wasow&lt;sup id=&quot;fnref:9&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:9&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;11&lt;/a&gt;&lt;/sup&gt;&lt;/td&gt;
      &lt;td&gt;Civil Rights&lt;/td&gt;
      &lt;td&gt;violent&lt;/td&gt;
      &lt;td&gt;-&lt;/td&gt;
      &lt;td&gt;rainfall&lt;/td&gt;
      &lt;td&gt;medium&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Wasow&lt;sup id=&quot;fnref:9:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:9&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;11&lt;/a&gt;&lt;/sup&gt;&lt;/td&gt;
      &lt;td&gt;Civil Rights&lt;/td&gt;
      &lt;td&gt;nonviolent&lt;/td&gt;
      &lt;td&gt;+&lt;/td&gt;
      &lt;td&gt;none (observational)&lt;/td&gt;
      &lt;td&gt;low&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Klein Teeselink &amp;amp; Melios&lt;sup id=&quot;fnref:6&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:6&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;12&lt;/a&gt;&lt;/sup&gt;&lt;/td&gt;
      &lt;td&gt;BLM&lt;/td&gt;
      &lt;td&gt;nonviolent&lt;/td&gt;
      &lt;td&gt;+&lt;/td&gt;
      &lt;td&gt;rainfall&lt;/td&gt;
      &lt;td&gt;high&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Larreboure &amp;amp; González&lt;sup id=&quot;fnref:14&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:14&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;13&lt;/a&gt;&lt;/sup&gt;&lt;/td&gt;
      &lt;td&gt;Women’s March&lt;/td&gt;
      &lt;td&gt;nonviolent&lt;/td&gt;
      &lt;td&gt;+&lt;/td&gt;
      &lt;td&gt;weather shocks&lt;/td&gt;
      &lt;td&gt;medium&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Hungerman &amp;amp; Moorthy&lt;sup id=&quot;fnref:36:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:36&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;7&lt;/a&gt;&lt;/sup&gt;&lt;/td&gt;
      &lt;td&gt;Earth Day&lt;/td&gt;
      &lt;td&gt;nonviolent&lt;/td&gt;
      &lt;td&gt;+&lt;/td&gt;
      &lt;td&gt;rainfall&lt;/td&gt;
      &lt;td&gt;high&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;McVeigh et al.&lt;sup id=&quot;fnref:2:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;5&lt;/a&gt;&lt;/sup&gt;&lt;/td&gt;
      &lt;td&gt;KKK activity&lt;/td&gt;
      &lt;td&gt;unclear&lt;/td&gt;
      &lt;td&gt;+&lt;/td&gt;
      &lt;td&gt;none (observational)&lt;/td&gt;
      &lt;td&gt;low&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Bremer et al.&lt;sup id=&quot;fnref:3:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;6&lt;/a&gt;&lt;/sup&gt;&lt;/td&gt;
      &lt;td&gt;Europe elections&lt;/td&gt;
      &lt;td&gt;nonviolent&lt;/td&gt;
      &lt;td&gt;?&lt;sup id=&quot;fnref:11&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:11&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;14&lt;/a&gt;&lt;/sup&gt;&lt;/td&gt;
      &lt;td&gt;none (observational)&lt;/td&gt;
      &lt;td&gt;low&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;(Methodological quality is relative. I’d have higher confidence in a true experiment than in any of these quasi-experimental methods.)&lt;/p&gt;

&lt;p&gt;Next I will review each study individually. Then I will collect the results into a meta-analysis.&lt;/p&gt;

&lt;h2 id=&quot;madestam-et-al-2013-on-tea-party-protests&quot;&gt;Madestam et al. (2013) on Tea Party protests&lt;/h2&gt;

&lt;p&gt;&lt;a href=&quot;/materials/TeaParty_Protests.pdf&quot;&gt;Madestam et al. (2013)&lt;/a&gt;&lt;sup id=&quot;fnref:4:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:4&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;10&lt;/a&gt;&lt;/sup&gt; looked at the effect of 2009 Tea Party protests on the 2012 US elections. It used the rainfall method to establish causality.&lt;/p&gt;

&lt;p&gt;As an additional check, the authors tested whether rainfall could predict Republican and Democratic vote shares in the 2008 election. (You may recall that the 2009 Tea Party protests did not occur until a year after the 2008 election.) If rainfall can predict the 2008 election results—before the protests occurred—that means the model was confounded.&lt;/p&gt;

&lt;p&gt;Madestam et al. (2013) found that rainfall in 2009 could &lt;em&gt;not&lt;/em&gt; predict votes in 2008 (see Table II), but it &lt;em&gt;could&lt;/em&gt; predict votes in 2012 (see Table VI).&lt;/p&gt;

&lt;p&gt;The authors also tested whether rainfall on other days prior to the Tea Party protests could predict 2009 voting patterns, and found that it could not.&lt;/p&gt;

&lt;p&gt;In the authors’ model, a rainy protest decreased Republicans’ share of the vote in the 2012 election by 1.04 percentage points (p &amp;lt; 0.0006&lt;sup id=&quot;fnref:7&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:7&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;15&lt;/a&gt;&lt;/sup&gt;). This suggests that protests did indeed increase the Republican vote share.&lt;/p&gt;

&lt;p&gt;Interestingly, rainfall decreased Republican vote share relative to the total population, but did not increase the Democratic share. This suggests that protests increased voter turnout but did not cause voters to change their minds.&lt;/p&gt;

&lt;p&gt;At first I thought the inability to predict 2008 votes might be a false negative (like a p = 0.06 situation), but this was not the case. Rainfall in 2009 &lt;em&gt;increased&lt;/em&gt; Republicans’ vote share in 2008, although only slightly (p = 0.38). (Remember that rainfall is supposed to decrease Republican votes by preventing Tea Party protests from happening.)&lt;/p&gt;

&lt;p&gt;There is another concern with the rainfall model—not with causality, but with overstating the strength of evidence. A standard statistical model assumes that all observations are independent. But rainfall is &lt;strong&gt;spatially autocorrelated&lt;/strong&gt;, which is the statistical way of saying that rain in one county is not independent of rainfall in the neighboring counties. If you have data from 2,758 counties, you can’t treat them as 2,758 independent samples.&lt;/p&gt;

&lt;p&gt;Madestam et al. (2013) used several methods to account for this. First, they clustered standard errors at the state level instead of the county level. Second, as a robustness check, they modeled spatial correlations as an inverse function of distance, which produced similar standard errors. Third, they dropped states one at a time to check whether any single state drove the results.&lt;/p&gt;
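
&lt;p&gt;State-level clustering can be sketched as follows (a simplified CR0 cluster-robust “sandwich” estimator in numpy; my own illustration, not the authors’ code). Counties within a state share a cluster, so their errors are allowed to be arbitrarily correlated:&lt;/p&gt;

```python
import numpy as np

def ols_clustered(X, y, clusters):
    """OLS coefficients with cluster-robust (CR0) standard errors.
    Errors may be correlated within a cluster (e.g. counties in a state)
    but are assumed independent across clusters."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    bread = np.linalg.inv(X.T @ X)
    meat = np.zeros((X.shape[1], X.shape[1]))
    for c in np.unique(clusters):
        score = X[clusters == c].T @ resid[clusters == c]
        meat += np.outer(score, score)
    return beta, np.sqrt(np.diag(bread @ meat @ bread))

# Toy data: 20 "states" x 25 "counties", with a shared within-state shock
rng = np.random.default_rng(0)
states = np.repeat(np.arange(20), 25)
rain = rng.normal(size=500)
votes = 0.3 * rain + rng.normal(size=20)[states] + rng.normal(size=500)
X = np.column_stack([np.ones(500), rain])
beta, se = ols_clustered(X, votes, states)
```

&lt;p&gt;Because the within-state shock is shared across counties, the clustered standard errors are typically wider than the naive ones, which is the honest answer when the 2,758 counties are not 2,758 independent samples.&lt;/p&gt;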

&lt;h3 id=&quot;placebo-tests&quot;&gt;Placebo tests&lt;/h3&gt;

&lt;p&gt;Finally:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;[W]e conduct a series of placebo tests using rainfall on other historical dates in April. These placebos are drawn from the same spatially correlated distribution as rainfall on April 15, 2009. If rainfall on the protest day has a causal effect, the actual estimate of rainfall ought to be an outlier in the distribution of placebo coefficients.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;They calculated the “placebo p-value” as the probability that rainfall on a random day could predict outcomes better than rainfall on protest day. If the model has correctly accounted for spatial autocorrelation, then the placebo p-value should equal the original model p-value, plus or minus some random variation.&lt;/p&gt;

&lt;p&gt;The authors ran tests on 627 random “placebo dates” and found that rain on protest day had a larger effect size than almost any of the placebo dates (see Figure V). This suggests that their corrections for spatial correlation worked, making false positives unlikely. However, the p-values in Figure V were a bit higher than the p-values in the main text, suggesting some effect-size inflation due to spatial autocorrelation.&lt;/p&gt;
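
&lt;p&gt;The placebo logic can be sketched like this (synthetic data, my own toy illustration): regress the outcome on protest-day rainfall, then on rainfall from many random dates, and ask how often a placebo date beats the real one:&lt;/p&gt;

```python
import numpy as np

rng = np.random.default_rng(0)
n_counties, n_placebos = 500, 627

def slope(x, y):
    # OLS slope of y on x
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

# Synthetic world in which the outcome depends on protest-day rain only
protest_rain = rng.normal(size=n_counties)
outcome = -0.5 * protest_rain + rng.normal(size=n_counties)

actual = slope(protest_rain, outcome)

# Rainfall on random "placebo dates" should have no predictive power
placebo = [slope(rng.normal(size=n_counties), outcome)
           for _ in range(n_placebos)]

# Placebo p-value: share of placebo dates with a larger effect size
placebo_p = np.mean(np.abs(placebo) >= abs(actual))
```

&lt;p&gt;In this synthetic setup the effect is real, so the actual coefficient is an extreme outlier among the placebos and the placebo p-value is tiny, which is the pattern Madestam et al. report.&lt;/p&gt;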

&lt;p&gt;&lt;img src=&quot;/assets/images/Tea-Party-Figure-V.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;h2 id=&quot;wasow-2020-on-1960s-civil-rights-protests&quot;&gt;Wasow (2020) on 1960s civil rights protests&lt;/h2&gt;

&lt;p&gt;&lt;a href=&quot;/materials/1960s_Black_Protests.pdf&quot;&gt;Wasow (2020)&lt;/a&gt;&lt;sup id=&quot;fnref:9:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:9&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;11&lt;/a&gt;&lt;/sup&gt; measured the effect of nonviolent protests using observational data only. I won’t discuss that portion of the paper.&lt;/p&gt;

&lt;p&gt;Wasow applied the quasi-experimental rainfall model to &lt;em&gt;violent&lt;/em&gt; protests and found that they had a significant backfire effect. I won’t focus on the evidence on violent protests because I would recommend against engaging in violence regardless of what result the study found.&lt;/p&gt;

&lt;p&gt;But if violent protests decrease public support, that’s (weak) evidence against protests working in general. The simplest hypothesis is “protests work”. But evidence on violent protests contradicts this, requiring a more complex claim: “nonviolent protests work, violent protests backfire”. I will evaluate this two-part hypothesis in the &lt;a href=&quot;#meta-analysis&quot;&gt;meta-analysis&lt;/a&gt; below.&lt;/p&gt;

&lt;p&gt;As some additional evidence on violent protests, &lt;a href=&quot;/materials/Riots_Property.pdf&quot;&gt;Collins &amp;amp; Margo (2007)&lt;/a&gt;&lt;sup id=&quot;fnref:48&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:48&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;16&lt;/a&gt;&lt;/sup&gt; used the rainfall method to find that 1960s riots decreased nearby property values. This is consistent with the finding from Wasow (2020) that violent protests backfire, but property values are not directly relevant to protesters’ outcomes. It’s conceivable that protests could simultaneously decrease local property values and increase public support.&lt;/p&gt;

&lt;p&gt;Replication data from Wasow (2020) is &lt;a href=&quot;https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/HVRCKM&quot;&gt;publicly available&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id=&quot;klein-teeselink--melios-2021-on-2020-black-lives-matter-protests&quot;&gt;Klein Teeselink &amp;amp; Melios (2021) on 2020 Black Lives Matter protests&lt;/h2&gt;

&lt;p&gt;&lt;a href=&quot;https://dx.doi.org/10.2139/ssrn.3809877&quot;&gt;Klein Teeselink &amp;amp; Melios (2021)&lt;/a&gt;&lt;sup id=&quot;fnref:6:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:6&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;12&lt;/a&gt;&lt;/sup&gt; used the rainfall method to establish the effect of Black Lives Matter protests on the 2020 presidential election.&lt;/p&gt;

&lt;p&gt;Unlike Madestam et al. (2013), this paper did not test whether rainfall could predict outcomes &lt;em&gt;before&lt;/em&gt; the protests (which would indicate confounding).&lt;/p&gt;

&lt;p&gt;As with Madestam et al. (2013), the authors considered the fact that rainfall is not independent across counties. But unlike the other studies in this review, Klein Teeselink &amp;amp; Melios (2021) also treated county vote changes as interdependent: their model assumes that the change in vote share in one county is partially explained by vote changes in nearby counties, scaled by inverse distance, using the method described by &lt;a href=&quot;/materials/beck2006.pdf&quot;&gt;Beck et al. (2006)&lt;/a&gt;&lt;sup id=&quot;fnref:45&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:45&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;17&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;Klein Teeselink &amp;amp; Melios’ model isolates the impact of &lt;em&gt;local&lt;/em&gt; protests on &lt;em&gt;local&lt;/em&gt; vote change. In the method of (e.g.) Madestam et al. (2013), some vote changes may be explained by protests in &lt;em&gt;neighboring counties&lt;/em&gt;. Klein Teeselink &amp;amp; Melios’ method is more rigorous in a sense, but we don’t actually &lt;em&gt;want&lt;/em&gt; to isolate local changes. We want to know how well protests work &lt;em&gt;overall&lt;/em&gt;, not just their local effects.&lt;/p&gt;

&lt;p&gt;Klein Teeselink &amp;amp; Melios performed a robustness check in Table A3, Panel D where they fully ignored spatial autocorrelation. This produced mean effects more in line with the other studies: a vote share change of 11.9 per protester (std err 2.9), and a change of 0.105 based on the probability of rain (std err 0.032).&lt;/p&gt;

&lt;p&gt;If you ignore spatial autocorrelation, you may overestimate the strength of evidence. However, in this case, ignoring spatial autocorrelation had only a modest impact on the t-stats:&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt;Primary Model&lt;/th&gt;
      &lt;th&gt;Ignoring Spatial Autocorrelation&lt;/th&gt;
      &lt;th&gt;Ignoring Spatial Autocorrelation + Counties Weighted by Population&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Vote Share Per Protester&lt;/td&gt;
      &lt;td&gt;5.5&lt;/td&gt;
      &lt;td&gt;4.1&lt;/td&gt;
      &lt;td&gt;7.2&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Vote Share by Rain Probability&lt;/td&gt;
      &lt;td&gt;2.3&lt;/td&gt;
      &lt;td&gt;3.3&lt;/td&gt;
      &lt;td&gt;9.3&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;The paper’s replication data is &lt;a href=&quot;https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/AVTED4&amp;amp;faces-redirect=true&quot;&gt;publicly available&lt;/a&gt;.&lt;/p&gt;

&lt;h3 id=&quot;failed-placebo-tests&quot;&gt;Failed placebo tests&lt;/h3&gt;

&lt;p&gt;Earlier, I discussed how Madestam et al. (2013) performed &lt;a href=&quot;#placebo-tests&quot;&gt;“placebo tests”&lt;/a&gt; to check that its model wouldn’t generate too many false positives. Klein Teeselink &amp;amp; Melios (2021) did the same, although with only nine placebo tests instead of 627:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/BLM-placebo-tests.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;(May 25 was the beginning of the period in which the majority of protests happened.)&lt;/p&gt;

&lt;p&gt;This chart shows that Klein Teeselink &amp;amp; Melios’ version of the rainfall method &lt;strong&gt;did not establish causality&lt;/strong&gt;. The fortnight of April 29—a month before the protests started—showed nearly the same effect size as the May 25 period, and 6 out of 9 placebo periods had p-values less than 0.05. So either some confounding variable explains the association between protests and vote share, or the standard error is underestimated due to spatial autocorrelation (or something similar).&lt;/p&gt;

&lt;p&gt;The authors write&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;part of this association may be caused by serial correlation in weather patterns&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In other words, a rainy June often also means a rainy April or May, so rain in April/May might appear to affect protest outcomes merely because it’s correlated with rain in June. (If so, the placebo failures are an artifact of correlated weather, and the model could still establish causality.)&lt;/p&gt;

&lt;p&gt;That may be true, but I’m not confident in that explanation,&lt;sup id=&quot;fnref:49&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:49&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;18&lt;/a&gt;&lt;/sup&gt; and therefore I can’t trust this model to establish causality. Therefore, &lt;strong&gt;I exclude the BLM protests from my meta-analysis.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;(But this does leave me wondering: How is it that rainfall shocks in April could predict vote changes in the 2020 presidential election?)&lt;/p&gt;

&lt;p&gt;It would be interesting to compare the publicly-available BLM data to the Earth Day data (see &lt;a href=&quot;#hungerman--moorthy-2023-on-earth-day&quot;&gt;below&lt;/a&gt;) to figure out why the Earth Day paper passed its placebo test but BLM did not. But that’s beyond the scope of this article.&lt;/p&gt;

&lt;h2 id=&quot;larreboure--gonzález-2021-on-the-womens-march&quot;&gt;Larreboure &amp;amp; González (2021) on the Women’s March&lt;/h2&gt;

&lt;p&gt;&lt;a href=&quot;https://mlarreboure.com/womenmarch.pdf&quot;&gt;Larreboure &amp;amp; González (2021)&lt;/a&gt;&lt;sup id=&quot;fnref:14:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:14&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;13&lt;/a&gt;&lt;/sup&gt; attempted to use the rainfall method to predict whether the 2017 Women’s March affected how many votes went to woman candidates in the 2018 election. I say “attempted” because they found that rainfall did not predict Women’s March attendance. So instead, they used “weather shocks” to predict voting outcomes. These shocks were defined as a combination of weather-related factors that they chose using a &lt;a href=&quot;https://arxiv.org/abs/1012.1297&quot;&gt;LASSO&lt;/a&gt;&lt;sup id=&quot;fnref:15&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:15&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;19&lt;/a&gt;&lt;/sup&gt; regression model.&lt;/p&gt;

&lt;p&gt;I see no obvious problem with the “weather shocks” method, but I’m wary of adding more mathematical complexity. Complexity makes flaws harder to spot.&lt;/p&gt;
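
&lt;p&gt;For intuition, here is a toy version of that selection step (using scikit-learn’s &lt;code&gt;Lasso&lt;/code&gt;; the feature names and coefficients are hypothetical, not from the paper). LASSO shrinks unhelpful coefficients to exactly zero, so the surviving features define the “weather shock”:&lt;/p&gt;

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n = 400
# Columns: rain, temperature, wind, humidity (hypothetical features)
weather = rng.normal(size=(n, 4))
# Suppose only rain and temperature actually affect march attendance
attendance = -1.0 * weather[:, 0] + 0.5 * weather[:, 1] + rng.normal(size=n)

# The L1 penalty zeroes out coefficients of unhelpful features
model = Lasso(alpha=0.1).fit(weather, attendance)
selected = np.flatnonzero(model.coef_)  # indices of surviving features
```

&lt;p&gt;The appeal is that the data, not the researcher, picks which weather variables matter; the cost is exactly the extra complexity I worry about above.&lt;/p&gt;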

&lt;p&gt;Larreboure &amp;amp; González found that protests increased voter turnout and the vote share of women candidates, among both Democrats and Republicans.&lt;/p&gt;

&lt;p&gt;The authors accounted for spatial autocorrelation by clustering standard errors at the state level. They included a robustness check where they adjusted for spatial autocorrelation using the method from &lt;a href=&quot;https://doi.org/10.1016/S0304-4076(98)00084-0&quot;&gt;Conley (1999)&lt;/a&gt;&lt;sup id=&quot;fnref:43&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:43&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;20&lt;/a&gt;&lt;/sup&gt; with two different distance cutoffs, 50 km and 100 km (in Table A.8).&lt;/p&gt;

&lt;p&gt;This paper had at least two inconsistencies in its reported figures:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Page 13 says an additional 1% of the population protesting increased vote share for women and under-represented groups by 12.95 percentage points (pp). However, Table 4 on page 28 reports an increase of 12.70 pp.&lt;sup id=&quot;fnref:16&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:16&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;21&lt;/a&gt;&lt;/sup&gt;&lt;/li&gt;
  &lt;li&gt;A more minor error, but page 13 says the 12.95 pp number is “remarkably similar to the impact of the Tea Party protesters on the vote share of the Republican Party (i.e. 12.59)”. However, the 12.59 number from Madestam et al. (see &lt;a href=&quot;#madestam-et-al-2013-on-tea-party-protests&quot;&gt;above&lt;/a&gt;) is the change in &lt;em&gt;absolute votes&lt;/em&gt;, not vote &lt;em&gt;share&lt;/em&gt;. The reported change in vote &lt;em&gt;share&lt;/em&gt; was 18.81, which is not remarkably similar to 12.95.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I will take the 12.70 number reported in Table 4 as correct (it is repeated again in the robustness checks).&lt;/p&gt;

&lt;p&gt;To be conservative, in my meta-analysis I will use the figures from the 50 km robustness check (where available) because they had the largest standard errors.&lt;/p&gt;

&lt;h2 id=&quot;hungerman--moorthy-2023-on-earth-day&quot;&gt;Hungerman &amp;amp; Moorthy (2023) on Earth Day&lt;/h2&gt;

&lt;p&gt;&lt;a href=&quot;https://www.aeaweb.org/content/file?id=16104&quot;&gt;Hungerman &amp;amp; Moorthy (2023)&lt;/a&gt;&lt;sup id=&quot;fnref:36:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:36&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;7&lt;/a&gt;&lt;/sup&gt; found that rainfall on the inaugural Earth Day, April 22, 1970, could predict people’s environmental attitudes on surveys from 1977 to 1993.&lt;/p&gt;

&lt;p&gt;It also directly measured environmental impact by looking at pollutant levels and rates of birth defects (which can result from exposure to environmental contaminants). It found that rainfall on Earth Day could predict birth defects.&lt;/p&gt;

&lt;p&gt;The paper claims that rainfall predicted carbon monoxide levels, and it did find a statistically significant change. However, Appendix Table A3 examines five environmental contaminants, of which only carbon monoxide had a t-stat above 2, and two out of five outcomes were (slightly) negative. The positive effect on carbon monoxide may be a false positive.&lt;/p&gt;

&lt;p&gt;Earlier I showed this chart:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/Earth-Day-birth-defects.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;The chart shows that rainfall on April 22, 1970 (Earth Day) predicts the rate of birth defects 10 years later, but rainfall on any other day does not.&lt;/p&gt;

&lt;p&gt;The same chart for the effect of rainfall on support for environmental spending:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/Earth-Day-rain.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;The paper addresses the previously mentioned spatial autocorrelation problem using the same techniques as Madestam et al. (2013). If spatial autocorrelation were distorting the effect sizes, we would expect to see more spurious statistically significant outcomes on the charts above. But we only see large effect sizes on Earth Day, not on any other day, which indicates that spatial autocorrelation is not a problem.&lt;/p&gt;

&lt;p&gt;Like Madestam et al. (2013), the authors generated hundreds of additional “placebo tests” (as described &lt;a href=&quot;#placebo-tests&quot;&gt;above&lt;/a&gt;) where they looked at how well rainfall on different random days could predict environmental outcomes. They found that the placebo p-values were very similar to the original p-values (and even lower in some cases):&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/Earth-Day-Figure-6.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;The paper’s source code and data are &lt;a href=&quot;https://doi.org/10.3886/E144941V1&quot;&gt;publicly available&lt;/a&gt;.&lt;/p&gt;

&lt;h1 id=&quot;meta-analysis&quot;&gt;Meta-analysis&lt;/h1&gt;

&lt;p&gt;For two of the five natural experiments, I calculated the expected change in the number of votes for each additional protester, and the corresponding change in vote share per protester (defined as votes per protester divided by turnout):&lt;/p&gt;

&lt;div id=&quot;table-2&quot; style=&quot;text-align:center;&quot;&gt;Table 2: Change in Votes Per Protester&lt;/div&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Protest&lt;/th&gt;
      &lt;th&gt;Votes&lt;/th&gt;
      &lt;th&gt;Std Err&lt;/th&gt;
      &lt;th&gt;Vote Share&lt;/th&gt;
      &lt;th&gt;Std Err&lt;/th&gt;
      &lt;th&gt;n&lt;/th&gt;
      &lt;th&gt;Source&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Tea Party&lt;/td&gt;
      &lt;td&gt;12.59&lt;/td&gt;
      &lt;td&gt;4.21&lt;/td&gt;
      &lt;td&gt;18.81&lt;/td&gt;
      &lt;td&gt;7.85&lt;/td&gt;
      &lt;td&gt;2758&lt;/td&gt;
      &lt;td&gt;Table VI&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Women’s March&lt;/td&gt;
      &lt;td&gt;3*&lt;/td&gt;
      &lt;td&gt;**&lt;/td&gt;
      &lt;td&gt;9.62&lt;/td&gt;
      &lt;td&gt;4.47&lt;/td&gt;
      &lt;td&gt;2936&lt;/td&gt;
      &lt;td&gt;Table A.8 and page 3&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;*only one significant figure was provided&lt;/p&gt;

&lt;p&gt;**not reported&lt;/p&gt;

&lt;p&gt;I did not include the Earth Day or Civil Rights protests because the studies did not provide the relevant data.&lt;sup id=&quot;fnref:57&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:57&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;22&lt;/a&gt;&lt;/sup&gt; The BLM study reported vote share per protester, but I excluded it due to the study’s failure to establish causality, discussed &lt;a href=&quot;#failed-placebo-tests&quot;&gt;previously&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I aggregated the results by applying a &lt;a href=&quot;https://de.meta-analysis.com/download/Intro_Models.pdf&quot;&gt;random-effects model&lt;/a&gt;. According to these two studies, protests have a mean impact of +11.95 vote share per protester (standard error 4.00; likelihood ratio 87.1; p &amp;lt; 0.003).&lt;/p&gt;
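&lt;p&gt;As a sketch of how such a pooled estimate can be computed, here is a minimal version of the pooling, assuming the standard DerSimonian-Laird random-effects estimator and a normal approximation for the p-value and likelihood ratio (the exact estimator I used may differ in details):&lt;/p&gt;

```python
# Sketch: DerSimonian-Laird random-effects pooling of the two
# vote-share-per-protester estimates from Table 2. This approximately
# reproduces the reported pooled mean (11.95), standard error (4.00),
# and between-study standard deviation (1.19).
import math

means = [18.81, 9.62]   # Tea Party, Women's March (vote share per protester)
ses = [7.85, 4.47]      # their reported standard errors

w = [1 / se ** 2 for se in ses]                        # fixed-effect weights
fixed = sum(wi * m for wi, m in zip(w, means)) / sum(w)
q = sum(wi * (m - fixed) ** 2 for wi, m in zip(w, means))  # heterogeneity statistic
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(means) - 1)) / c)            # between-study variance
tau = math.sqrt(tau2)

w_re = [1 / (se ** 2 + tau2) for se in ses]            # random-effects weights
pooled = sum(wi * m for wi, m in zip(w_re, means)) / sum(w_re)
pooled_se = math.sqrt(1 / sum(w_re))

z = pooled / pooled_se
p_two_sided = math.erfc(z / math.sqrt(2))              # normal approximation
likelihood_ratio = math.exp(z ** 2 / 2)                # density at pooled mean vs. at 0
```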

&lt;p&gt;(The &lt;a href=&quot;https://arbital.greaterwrong.com/p/likelihoods_not_pvalues/&quot;&gt;likelihood ratio&lt;/a&gt; tells us how much evidence the data provides. A likelihood ratio of 87.1 means that, assuming the studies’ methodologies are perfect, the odds of getting this result are 87.1x higher if the true mean is 11.95 than if the true mean is 0.)&lt;/p&gt;

&lt;p&gt;If we are considering supporting some upcoming protest, we might want to estimate the probability that it will backfire. One way to do that is by using the pooled sample of past protests.&lt;/p&gt;

&lt;p&gt;This pooled sample has a between-study standard deviation of 1.19, which reflects how much the effectiveness of protests varied across the studies. If we assume that the sample’s mean and between-study variation are exactly correct (which is questionable, since the pool only includes two studies), then we can model protest outcomes as a normal distribution with a mean of 11.95 and a standard deviation of 1.19.&lt;/p&gt;

&lt;p&gt;Under this model, the probability of a protest having a negative effect—i.e., producing a value less than zero—is extremely small. But I would not take these precise numbers too seriously.&lt;/p&gt;
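&lt;p&gt;For concreteness, under this model the backfire probability is just the normal lower tail below zero, which can be computed directly (a sketch using the pooled numbers above):&lt;/p&gt;

```python
# Tail probability of a negative effect under the Normal(11.95, 1.19) model.
# The z-score is about 10, so the probability is astronomically small
# (roughly 5e-24), consistent with "extremely small" above.
import math

mean, sd = 11.95, 1.19
z = mean / sd
p_negative = 0.5 * math.erfc(z / math.sqrt(2))   # P(X below 0)
```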

&lt;p&gt;Vote share per protester is the most interesting metric for my purposes because it gives information about cost-effectiveness—it tells you how much impact you can expect for each marginal protester. But the natural experiments reported on other outcomes as well, such as overall change in vote share (as determined by changes in rainfall) and popular support for protesters’ objectives.&lt;/p&gt;

&lt;p&gt;I applied a random-effects model to aggregate a few different sets of outcomes:&lt;/p&gt;

&lt;div id=&quot;table-3&quot; style=&quot;text-align:center;&quot;&gt;Table 3: Pooled Sample Outcomes&lt;/div&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Outcomes&lt;/th&gt;
      &lt;th&gt;Mean&lt;/th&gt;
      &lt;th&gt;Std Err&lt;/th&gt;
      &lt;th&gt;likelihood ratio&lt;/th&gt;
      &lt;th&gt;p-value&lt;/th&gt;
      &lt;th&gt;P(negative effect)&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Vote Share Per Protester&lt;/td&gt;
      &lt;td&gt;11.95&lt;/td&gt;
      &lt;td&gt;4.00&lt;/td&gt;
      &lt;td&gt;87.1&lt;/td&gt;
      &lt;td&gt;0.003&lt;/td&gt;
      &lt;td&gt;0&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Vote Share&lt;/td&gt;
      &lt;td&gt;1.59&lt;/td&gt;
      &lt;td&gt;0.48&lt;/td&gt;
      &lt;td&gt;257&lt;/td&gt;
      &lt;td&gt;0.001&lt;/td&gt;
      &lt;td&gt;0.002&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Vote Share (Rain Only)&lt;/td&gt;
      &lt;td&gt;1.14&lt;/td&gt;
      &lt;td&gt;0.42&lt;/td&gt;
      &lt;td&gt;39.3&lt;/td&gt;
      &lt;td&gt;0.007&lt;/td&gt;
      &lt;td&gt;0&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Single Hypothesis&lt;/td&gt;
      &lt;td&gt;1.06&lt;/td&gt;
      &lt;td&gt;0.78&lt;/td&gt;
      &lt;td&gt;2.55&lt;/td&gt;
      &lt;td&gt;0.172&lt;/td&gt;
      &lt;td&gt;0.199&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Favorability&lt;/td&gt;
      &lt;td&gt;2.68&lt;/td&gt;
      &lt;td&gt;2.32&lt;/td&gt;
      &lt;td&gt;1.95&lt;/td&gt;
      &lt;td&gt;0.249&lt;/td&gt;
      &lt;td&gt;0.176&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;ul&gt;
  &lt;li&gt;Row 1 – Vote Share Per Protester uses the pooled outcome that I described in &lt;a href=&quot;#table-2&quot;&gt;Table 2&lt;/a&gt;, including Tea Party and Women’s March vote share per protester.&lt;/li&gt;
  &lt;li&gt;Row 2 – Vote Share takes these outcomes from the studies on nonviolent protests:
    &lt;ul&gt;
      &lt;li&gt;Tea Party – Republican vote share&lt;/li&gt;
      &lt;li&gt;Women’s March – women’s vote share&lt;/li&gt;
      &lt;li&gt;Earth Day – favorability (1)&lt;sup id=&quot;fnref:21&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:21&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;23&lt;/a&gt;&lt;/sup&gt; as a proxy for vote share&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;Row 3 – Vote Share (Rain Only) uses the same outcomes as Row 2, but excluding the Women’s March outcome because it used weather shocks rather than rainfall.&lt;/li&gt;
  &lt;li&gt;Row 4 – Single Hypothesis does not differentiate between nonviolent and violent protests, instead lumping all studies together. It includes the three Vote Share measures from Row 2, plus Civil Rights – vote share.&lt;/li&gt;
  &lt;li&gt;Row 5 – Favorability includes measured changes in popular support for a protest’s goals:
    &lt;ul&gt;
      &lt;li&gt;Tea Party – support for the Tea Party&lt;/li&gt;
      &lt;li&gt;Earth Day – favorability (1)&lt;sup id=&quot;fnref:21:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:21&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;23&lt;/a&gt;&lt;/sup&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;(Note: &lt;code&gt;P(negative effect) = 0&lt;/code&gt; doesn’t mean it’s &lt;em&gt;literally&lt;/em&gt; zero, but it’s so small that it gets rounded off to zero.)&lt;/p&gt;

&lt;p&gt;The Women’s March and Earth Day papers used continuous rainfall variables instead of binary (rain vs. no rain); those papers’ outcomes were standardized using the method from &lt;a href=&quot;/materials/gelman2008.pdf&quot;&gt;Gelman (2007)&lt;/a&gt;&lt;sup id=&quot;fnref:58&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:58&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;24&lt;/a&gt;&lt;/sup&gt; to put them on the same scale as binary variables.&lt;sup id=&quot;fnref:59&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:59&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;25&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;
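&lt;p&gt;The rescaling works by dividing a continuous input by two of its standard deviations, which makes its coefficient roughly comparable to that of a binary input. A minimal sketch, using hypothetical rainfall numbers rather than the papers’ actual data:&lt;/p&gt;

```python
# Sketch of the two-standard-deviations rescaling: dividing a continuous
# input by 2*sd (equivalently, multiplying its coefficient by 2*sd) puts it
# on roughly the same scale as a binary input. The rainfall values and the
# regression coefficient below are hypothetical, for illustration only.
import statistics

rainfall = [0.0, 0.1, 0.0, 0.4, 1.2, 0.0, 0.3, 0.0, 0.8, 0.2]  # inches, hypothetical
beta_per_inch = 2.1   # hypothetical coefficient per inch of rain

sd = statistics.pstdev(rainfall)
beta_standardized = beta_per_inch * 2 * sd   # now on a binary-comparable scale
```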

&lt;p&gt;The Vote Share Per Protester and Vote Share tests produce low p-values/high likelihood ratios, and under those models, nonviolent protests have virtually no chance of having a negative effect on support. Favorability has a weak likelihood ratio due to a large variance between outcomes.&lt;sup id=&quot;fnref:60&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:60&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;26&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;Under the Single Hypothesis model, protests have a much weaker p-value/likelihood ratio. Naturally, when you include a negative outcome, it pulls down the average effect quite a bit. The mean is still positive, which makes sense given that only one out of four included protests was violent.&lt;/p&gt;

&lt;p&gt;Is it fair to separate out violent and nonviolent protests? I’m wary of adding complexity to a hypothesis, but I believe it’s justified in this case:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;It’s intuitively plausible that peaceful protests would earn support while violence would backfire.&lt;/li&gt;
  &lt;li&gt;Lab experiments&lt;sup id=&quot;fnref:50:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:50&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt; and observational studies support this distinction.&lt;/li&gt;
  &lt;li&gt;I ran a t-test for the hypothesis that nonviolent and violent protests have the same effect on voting outcomes, comparing the pooled outcome from Row 2 – Vote Share against the Civil Rights protest outcome. The result had a likelihood ratio of 55.1 and p &amp;lt; 0.005. We can strongly reject the hypothesis that these two samples have the same mean.&lt;/li&gt;
&lt;/ul&gt;
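&lt;p&gt;That comparison can be sketched as a two-sample z-test on the reported means and standard errors, assuming (as above) that the likelihood ratio comes from a normal approximation:&lt;/p&gt;

```python
# Sketch of the nonviolent-vs-violent comparison: a two-sample z-test using
# the pooled Vote Share outcome and the Civil Rights (violent) outcome.
# This approximately reproduces the reported p below 0.005 and a likelihood
# ratio near 55.
import math

mean_nv, se_nv = 1.59, 0.48      # nonviolent pooled outcome (Table 3, Row 2)
mean_v, se_v = -5.56, 2.48       # Civil Rights outcome (vote share among white voters)

z = (mean_nv - mean_v) / math.sqrt(se_nv ** 2 + se_v ** 2)
p_two_sided = math.erfc(z / math.sqrt(2))
likelihood_ratio = math.exp(z ** 2 / 2)
```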

&lt;p&gt;There are some reasons to believe these results may be overstated, which I will address under &lt;a href=&quot;#potential-problems-with-the-research&quot;&gt;Potential problems with the research&lt;/a&gt;. There are also at least two reasons to believe they may be understated:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Rainfall does not perfectly predict whether protests occur. (Sometimes people protest in the rain.) If protests genuinely work, then the effect of protests will be larger than the effect of protests &lt;em&gt;as predicted by rainfall.&lt;/em&gt;&lt;/li&gt;
  &lt;li&gt;I aggregated the most similar metrics into pooled outcomes. But these were not always the strongest metrics. For example, Earth Day protests strongly predicted birth defects (likelihood ratio 55,000; p &amp;lt; 3e-6). But I did not include birth defects in the meta-analysis because it did not have any comparable counterpart in the other studies.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Table 4 shows outcomes across the five studies, estimated by looking at counties where it rained vs. did not rain. The Women’s March and Earth Day results are standardized as explained above.&lt;sup id=&quot;fnref:59:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:59&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;25&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;div id=&quot;table-4&quot; style=&quot;text-align:center;&quot;&gt;Table 4: Societal-Level Protest Outcomes&lt;/div&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Protest&lt;/th&gt;
      &lt;th&gt;Outcome&lt;/th&gt;
      &lt;th&gt;Change&lt;/th&gt;
      &lt;th&gt;Std Err&lt;/th&gt;
      &lt;th&gt;Source&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Tea Party&lt;/td&gt;
      &lt;td&gt;votes (as % of population)&lt;/td&gt;
      &lt;td&gt;1.04%**&lt;/td&gt;
      &lt;td&gt;0.30%&lt;/td&gt;
      &lt;td&gt;Table VI&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Tea Party&lt;/td&gt;
      &lt;td&gt;vote share&lt;/td&gt;
      &lt;td&gt;1.55%*&lt;/td&gt;
      &lt;td&gt;0.69%&lt;/td&gt;
      &lt;td&gt;Table VI&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Tea Party&lt;/td&gt;
      &lt;td&gt;conservative vote score&lt;sup id=&quot;fnref:30&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:30&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;27&lt;/a&gt;&lt;/sup&gt;&lt;/td&gt;
      &lt;td&gt;1.922*&lt;/td&gt;
      &lt;td&gt;0.937&lt;/td&gt;
      &lt;td&gt;Table VII&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Tea Party&lt;/td&gt;
      &lt;td&gt;average belief effect&lt;sup id=&quot;fnref:31&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:31&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;28&lt;/a&gt;&lt;/sup&gt;&lt;/td&gt;
      &lt;td&gt;0.13***&lt;/td&gt;
      &lt;td&gt;0.037&lt;/td&gt;
      &lt;td&gt;Table V&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Tea Party&lt;/td&gt;
      &lt;td&gt;strongly supports Tea Party&lt;/td&gt;
      &lt;td&gt;5.7%*&lt;/td&gt;
      &lt;td&gt;2.5%&lt;/td&gt;
      &lt;td&gt;Table V&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Tea Party&lt;/td&gt;
      &lt;td&gt;Sarah Palin favorability&lt;/td&gt;
      &lt;td&gt;5.7%*&lt;/td&gt;
      &lt;td&gt;2.6%&lt;/td&gt;
      &lt;td&gt;Table V&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Tea Party&lt;/td&gt;
      &lt;td&gt;“outraged about way things are going in country”&lt;/td&gt;
      &lt;td&gt;4.6%*&lt;/td&gt;
      &lt;td&gt;2.1%&lt;/td&gt;
      &lt;td&gt;Table V&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Tea Party&lt;/td&gt;
      &lt;td&gt;opposes raising taxes on income &amp;gt;$250K&lt;/td&gt;
      &lt;td&gt;5.8%&lt;/td&gt;
      &lt;td&gt;3.0%&lt;/td&gt;
      &lt;td&gt;Table V&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Tea Party&lt;/td&gt;
      &lt;td&gt;“Americans have less freedom than in 2008”&lt;/td&gt;
      &lt;td&gt;6.5%*&lt;/td&gt;
      &lt;td&gt;2.6%&lt;/td&gt;
      &lt;td&gt;Table V&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Tea Party&lt;/td&gt;
      &lt;td&gt;Obama unfavorability&lt;/td&gt;
      &lt;td&gt;4.6%&lt;/td&gt;
      &lt;td&gt;2.4%&lt;/td&gt;
      &lt;td&gt;Table V&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Civil Rights (violent)&lt;/td&gt;
      &lt;td&gt;vote share among white voters&lt;/td&gt;
      &lt;td&gt;–5.56%*&lt;/td&gt;
      &lt;td&gt;2.48%&lt;/td&gt;
      &lt;td&gt;Appendix, Table 12&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;BLM&lt;/td&gt;
      &lt;td&gt;vote share&lt;/td&gt;
      &lt;td&gt;2.7%&lt;/td&gt;
      &lt;td&gt;1.2%&lt;/td&gt;
      &lt;td&gt;Table 2&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;BLM&lt;/td&gt;
      &lt;td&gt;“Blacks should not receive special favors”&lt;/td&gt;
      &lt;td&gt;–0.242&lt;sup id=&quot;fnref:33&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:33&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;29&lt;/a&gt;&lt;/sup&gt;&lt;/td&gt;
      &lt;td&gt;0.360&lt;/td&gt;
      &lt;td&gt;Table 3&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;BLM&lt;/td&gt;
      &lt;td&gt;“Slavery caused current disparities”&lt;/td&gt;
      &lt;td&gt;0.339&lt;sup id=&quot;fnref:33:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:33&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;29&lt;/a&gt;&lt;/sup&gt;&lt;/td&gt;
      &lt;td&gt;0.388&lt;/td&gt;
      &lt;td&gt;Table 3&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Women’s March&lt;/td&gt;
      &lt;td&gt;women’s vote share&lt;/td&gt;
      &lt;td&gt;2.48%***&lt;/td&gt;
      &lt;td&gt;0.64%&lt;/td&gt;
      &lt;td&gt;Table 4&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Women’s March&lt;/td&gt;
      &lt;td&gt;voter turnout&lt;/td&gt;
      &lt;td&gt;0.41%**&lt;/td&gt;
      &lt;td&gt;0.14%&lt;/td&gt;
      &lt;td&gt;Table 4&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Earth Day&lt;/td&gt;
      &lt;td&gt;favorability (1)&lt;sup id=&quot;fnref:21:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:21&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;23&lt;/a&gt;&lt;/sup&gt;&lt;/td&gt;
      &lt;td&gt;0.90%&lt;/td&gt;
      &lt;td&gt;0.53%&lt;/td&gt;
      &lt;td&gt;Table 2&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Earth Day&lt;/td&gt;
      &lt;td&gt;favorability (1)&lt;sup id=&quot;fnref:21:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:21&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;23&lt;/a&gt;&lt;/sup&gt; among under-20s&lt;/td&gt;
      &lt;td&gt;1.67%**&lt;/td&gt;
      &lt;td&gt;0.62%&lt;/td&gt;
      &lt;td&gt;Table 2&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Earth Day&lt;/td&gt;
      &lt;td&gt;favorability (2)&lt;sup id=&quot;fnref:21:4&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:21&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;23&lt;/a&gt;&lt;/sup&gt;&lt;/td&gt;
      &lt;td&gt;1.12&lt;/td&gt;
      &lt;td&gt;0.70&lt;/td&gt;
      &lt;td&gt;Table 2&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Earth Day&lt;/td&gt;
      &lt;td&gt;favorability (2)&lt;sup id=&quot;fnref:21:5&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:21&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;23&lt;/a&gt;&lt;/sup&gt; among under-20s&lt;/td&gt;
      &lt;td&gt;1.90*&lt;/td&gt;
      &lt;td&gt;0.82&lt;/td&gt;
      &lt;td&gt;Table 2&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Earth Day&lt;/td&gt;
      &lt;td&gt;carbon monoxide&lt;/td&gt;
      &lt;td&gt;0.07*&lt;/td&gt;
      &lt;td&gt;0.03&lt;/td&gt;
      &lt;td&gt;Table 4&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Earth Day&lt;/td&gt;
      &lt;td&gt;birth defects&lt;/td&gt;
      &lt;td&gt;1.00***&lt;/td&gt;
      &lt;td&gt;0.21&lt;/td&gt;
      &lt;td&gt;Table 4&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;*p &amp;lt; 0.05; **p &amp;lt; 0.01; ***p &amp;lt; 0.001&lt;/p&gt;

&lt;h1 id=&quot;potential-problems-with-the-research&quot;&gt;Potential problems with the research&lt;/h1&gt;

&lt;h2 id=&quot;spatial-autocorrelation&quot;&gt;Spatial autocorrelation&lt;/h2&gt;

&lt;p&gt;Recall that “spatial autocorrelation” is a technical way of saying “rainfall is not independent across counties”. If you assume your samples are independent when they’re not, your standard errors will be too low—giving you too much confidence in your results.&lt;/p&gt;

&lt;p&gt;It’s conceivable that all five studies overstated the strength of their results due to spatial autocorrelation.&lt;/p&gt;

&lt;p&gt;Each study on nonviolent protests used at least some technique to correct for spatial autocorrelation. &lt;a href=&quot;#madestam-et-al-2013-on-tea-party-protests&quot;&gt;Madestam et al. (2013)&lt;/a&gt; and &lt;a href=&quot;#hungerman--moorthy-2023-on-earth-day&quot;&gt;Hungerman &amp;amp; Moorthy (2023)&lt;/a&gt; included “placebo tests”. The placebo tests from Madestam et al. (2013) indicated that these corrections mostly worked but did not fully succeed, whereas Hungerman &amp;amp; Moorthy’s corrections apparently did succeed. On balance, this suggests that the standard errors of the pooled outcome may be understated, but probably not by a large margin.&lt;/p&gt;

&lt;p&gt;Two of the pooled outcomes from &lt;a href=&quot;#table-3&quot;&gt;Table 3&lt;/a&gt;—the Vote Share Per Protester and Vote Share pools—had strong likelihood ratios / low p-values. That suggests they should hold up even with somewhat reduced statistical power.&lt;/p&gt;

&lt;h2 id=&quot;publication-bias&quot;&gt;Publication bias&lt;/h2&gt;

&lt;p&gt;The standard method to assess &lt;a href=&quot;https://en.wikipedia.org/wiki/Publication_bias&quot;&gt;publication bias&lt;/a&gt; would be to make a &lt;a href=&quot;https://en.wikipedia.org/wiki/Funnel_plot&quot;&gt;funnel plot&lt;/a&gt;. I didn’t do that for two reasons:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;With only five studies (at best), there aren’t enough data points to detect publication bias even if it exists.&lt;/li&gt;
  &lt;li&gt;A funnel plot only works if your studies cover a range of sample sizes. All the natural experiments have roughly the same sample size (because they all look at county-level data for the majority of US counties).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As an alternative, I tested how the results might change if we discovered some unpublished null results. I used the following procedure:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Choose one of the pooled outcomes from &lt;a href=&quot;#table-3&quot;&gt;Table 3&lt;/a&gt;.&lt;/li&gt;
  &lt;li&gt;For each individual study outcome, clone it to create a “dummy null outcome” with the same standard error and sample size, but a mean of 0. This represents a hypothetical study that didn’t get published because it found a null result.&lt;/li&gt;
  &lt;li&gt;Construct a larger pooled sample using all four or six outcomes (the two or three real outcomes plus the two or three null dummies).&lt;/li&gt;
&lt;/ol&gt;
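&lt;p&gt;A minimal sketch of this procedure for the Vote Share Per Protester pool, assuming DerSimonian-Laird random-effects weights:&lt;/p&gt;

```python
# Sketch of the dummy-null adjustment: each real outcome from Table 2 gets a
# cloned "null" twin with the same standard error but a mean of 0, and the
# four outcomes are re-pooled. This approximately reproduces the adjusted
# mean (6.43) and standard error (4.01) in Table 5.
import math

real_means, real_ses = [18.81, 9.62], [7.85, 4.47]
means = real_means + [0.0] * len(real_means)   # one null clone per real outcome
ses = real_ses * 2                             # clones keep the same standard errors

w = [1 / se ** 2 for se in ses]
fixed = sum(wi * m for wi, m in zip(w, means)) / sum(w)
q = sum(wi * (m - fixed) ** 2 for wi, m in zip(w, means))
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(means) - 1)) / c)    # between-study variance

w_re = [1 / (se ** 2 + tau2) for se in ses]
adj_mean = sum(wi * m for wi, m in zip(w_re, means)) / sum(w_re)
adj_se = math.sqrt(1 / sum(w_re))
```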

&lt;div id=&quot;table-5&quot; style=&quot;text-align:center;&quot;&gt;Table 5: Pooled Sample Effects, Adjusted for Publication Bias&lt;/div&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Outcomes&lt;/th&gt;
      &lt;th&gt;Mean&lt;/th&gt;
      &lt;th&gt;Std Err&lt;/th&gt;
      &lt;th&gt;likelihood ratio&lt;/th&gt;
      &lt;th&gt;p-value&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Vote Share Per Protester&lt;/td&gt;
      &lt;td&gt;6.43&lt;/td&gt;
      &lt;td&gt;4.01&lt;/td&gt;
      &lt;td&gt;3.61&lt;/td&gt;
      &lt;td&gt;0.11&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Vote Share&lt;/td&gt;
      &lt;td&gt;0.80&lt;/td&gt;
      &lt;td&gt;0.41&lt;/td&gt;
      &lt;td&gt;6.97&lt;/td&gt;
      &lt;td&gt;0.049&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Vote Share (Rain Only)&lt;/td&gt;
      &lt;td&gt;0.58&lt;/td&gt;
      &lt;td&gt;0.36&lt;/td&gt;
      &lt;td&gt;3.77&lt;/td&gt;
      &lt;td&gt;0.104&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Favorability&lt;/td&gt;
      &lt;td&gt;0.74&lt;/td&gt;
      &lt;td&gt;0.65&lt;/td&gt;
      &lt;td&gt;1.91&lt;/td&gt;
      &lt;td&gt;0.256&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Adding in null results considerably weakens the strength of evidence.&lt;/p&gt;

&lt;p&gt;This approach is deliberately conservative. I wouldn’t say this meta-analysis is robust to publication bias, but it’s not particularly vulnerable to publication bias, either.&lt;/p&gt;

&lt;p&gt;(The dummy-null approach leaves something to be desired. If the true mean were 6.43 as the pooled sample suggests, it would be surprising to see two positive results with low p-values plus two null results with equally tight standard errors. But I haven’t thought of any better ideas for how to test publication bias.)&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;/materials/Protest Meta-Analysis.pdf&quot;&gt;Orazani et al. (2021)&lt;/a&gt;&lt;sup id=&quot;fnref:50:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:50&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt; reviewed lab experiments on protest favorability. Among other things, it looked at publication bias. This paper might be informative, since it stands to reason that if experimental researchers on protests have a certain bias, then sociological researchers might have a similar bias.&lt;/p&gt;

&lt;p&gt;The paper included a funnel plot:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/Orazani-funnel-plot.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;To supplement the plot, I tested for publication bias using two statistical tests:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://training.cochrane.org/resource/identifying-publication-bias-meta-analyses-continuous-outcomes&quot;&gt;Egger’s regression test&lt;/a&gt;&lt;sup id=&quot;fnref:51&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:51&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;30&lt;/a&gt;&lt;/sup&gt; found r = 0.124, p &amp;lt; 0.646 (r &amp;gt; 0 means that more powerful studies had &lt;em&gt;larger&lt;/em&gt; mean effects, which if anything is evidence of inverse publication bias).&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://en.wikipedia.org/wiki/Kendall_rank_correlation_coefficient#Hypothesis_test&quot;&gt;Kendall’s tau test&lt;/a&gt; found p &amp;lt; 0.565.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Orazani et al. (2021) included 14 experiments and 2 non-experimental studies. I also tested for publication bias when excluding the non-experiments and again found highly insignificant p-values.&lt;/p&gt;

&lt;p&gt;Orazani et al. (2021) tested for a difference between published and unpublished studies (although they defined “unpublished” in a way that seemed strange to me—they counted dissertations and conference presentations as unpublished). They found a significant difference in effect size, suggesting the presence of publication bias. Published studies had a &lt;a href=&quot;https://en.wikipedia.org/wiki/Effect_size#Cohen&apos;s_d&quot;&gt;Cohen’s d&lt;/a&gt; of 0.39, versus 0.22 for unpublished studies. However, this difference disappeared when the authors controlled for certain features of the protests being studied (e.g. protests directed at the government as opposed to society). I am not sure what to make of this, but there is at least &lt;em&gt;some&lt;/em&gt; evidence of publication bias.&lt;/p&gt;

&lt;h2 id=&quot;data-fabrication&quot;&gt;Data fabrication&lt;/h2&gt;

&lt;p&gt;Most meta-analyses do not consider the possibility that some studies’ data might be fabricated, and I believe they should. Checking for fraud is difficult in general, but I will do some basic checks.&lt;/p&gt;

&lt;p&gt;When humans fabricate data, they often come up with numbers that don’t look random. Real data should follow two observable patterns:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;The last digits of numbers should be uniformly distributed.&lt;/li&gt;
  &lt;li&gt;The first digits of numbers should NOT be uniformly distributed. Instead, they should obey &lt;a href=&quot;https://en.wikipedia.org/wiki/Benford&apos;s_law&quot;&gt;Benford’s law&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I tested for suspicious patterns by collecting a list of statistical results (means and standard errors for various outcomes) from the BLM, Tea Party, Women’s March, and Earth Day papers. I did not include the Civil Rights paper because its quasi-experimental data only included violent protests.&lt;/p&gt;
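&lt;p&gt;As an illustration, here is a sketch of the first-digit check: a chi-square goodness-of-fit test of leading digits against Benford’s law, run on a handful of the means and standard errors quoted in this post (the actual test used the full list of collected statistics per paper):&lt;/p&gt;

```python
# Sketch of the first-digit check against Benford's law. The values below
# are a small sample of statistics quoted above, for illustration; with so
# few numbers the expected counts are small, so treat this as a toy version.
import math

def first_digit(x):
    # Leading nonzero digit of a positive number like 0.64 or 18.81.
    return int(str(abs(x)).lstrip("0.")[0])

def chi2_sf_even_df(x, df):
    # Chi-square survival function; closed form valid only for even df.
    half = df // 2
    return math.exp(-x / 2) * sum((x / 2) ** k / math.factorial(k) for k in range(half))

values = [12.59, 4.21, 18.81, 7.85, 9.62, 4.47, 1.55, 0.69, 5.7, 2.5,
          4.6, 2.1, 6.5, 2.6, 2.48, 0.64, 0.41, 0.14, 1.04, 0.3]
counts = [sum(1 for v in values if first_digit(v) == d) for d in range(1, 10)]
expected = [len(values) * math.log10(1 + 1 / d) for d in range(1, 10)]
chi2 = sum((o - e) ** 2 / e for o, e in zip(counts, expected))
p = chi2_sf_even_df(chi2, df=8)   # 9 digit bins give 8 degrees of freedom
```

A high p-value here means the leading digits are consistent with Benford’s law; the last-digit uniformity check works the same way with a uniform expected distribution.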

&lt;p&gt;I also did a power check to determine whether the tests have adequate statistical power. We should be able to reject the hypotheses that the &lt;em&gt;first&lt;/em&gt; digits follow a uniform distribution, and that the &lt;em&gt;last&lt;/em&gt; digits follow Benford’s law.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;  Tea Party:
      First-digit Benford&apos;s Law p-value: 0.598
      Last-digit uniformity p-value:     0.306
      Power check p-values:              0.001, 0.002

  BLM:
      First-digit Benford&apos;s Law p-value: 0.438
      Last-digit uniformity p-value:     0.598
      Power check p-values:              0.001, 0.001

  Women&apos;s March:
      First-digit Benford&apos;s Law p-value: 0.181
      Last-digit uniformity p-value:     0.891
      Power check p-values:              0.001, 0.001

  Earth Day:
      First-digit Benford&apos;s Law p-value: 0.121
      Last-digit uniformity p-value:     0.224
      Power check p-values:              0.038, 0.001
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;(P-values are rounded up to 3 digits. See &lt;a href=&quot;#source-code&quot;&gt;source code&lt;/a&gt; for full details.)&lt;/p&gt;

&lt;p&gt;In all cases, I found high p-values for the first and last digits, which means the data follow the expected natural patterns. And I found very low p-values for the power check tests, which means the tests are sufficiently powerful (except for Earth Day first digits, where few independent outcomes were reported).&lt;/p&gt;

&lt;p&gt;These tests do not rule out more sophisticated fraud. For example, if the authors generated false data and then calculated statistical tests on top of them, the fabricated results would still pass the first-digit and last-digit checks.&lt;/p&gt;

&lt;h2 id=&quot;data-errors&quot;&gt;Data errors&lt;/h2&gt;

&lt;p&gt;Checking for data errors is difficult in general.&lt;sup id=&quot;fnref:54&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:54&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;31&lt;/a&gt;&lt;/sup&gt; I did a basic consistency check to verify that each study’s reported means and standard errors seemed internally consistent, but it’s hard to see errors that way.&lt;/p&gt;

&lt;p&gt;The only data error I noticed was in Larreboure &amp;amp; González (2021)&lt;sup id=&quot;fnref:14:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:14&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;13&lt;/a&gt;&lt;/sup&gt;. As I mentioned before, it reported inconsistent numbers for the change in vote share based on each 1% of the population protesting: 12.95 pp (std err 5.63) on page 13 in the text, and 12.70 pp (std err 5.48) in Table 4.&lt;/p&gt;

&lt;p&gt;The difference is small, which suggests the authors may have made some revision to their calculations but didn’t update all the values reported in their manuscript. If so, the number in Table 4 is likely the correct one.&lt;sup id=&quot;fnref:52&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:52&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;32&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;This inconsistency suggests that the authors have some degrees of freedom for &lt;a href=&quot;https://en.wikipedia.org/wiki/Data_dredging&quot;&gt;p-hacking&lt;/a&gt;, but the two numbers are similar enough to have minimal impact on the result of my meta-analysis.&lt;/p&gt;

&lt;h2 id=&quot;will-the-results-generalize&quot;&gt;Will the results generalize?&lt;/h2&gt;

&lt;p&gt;All the protests covered by natural experiments have certain commonalities:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;They all had a large number of participants.&lt;/li&gt;
  &lt;li&gt;They were all nationwide (they had to be, so the study authors could use county-level data).&lt;/li&gt;
  &lt;li&gt;They all took place in the United States.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Will the results generalize to other countries? Will the results generalize to smaller-scale or local protests?&lt;/p&gt;

&lt;p&gt;The fact that these protests were so widespread means their objectives couldn’t have been far outside the &lt;a href=&quot;https://en.wikipedia.org/wiki/Overton_window&quot;&gt;Overton window&lt;/a&gt; (i.e., the range of politically acceptable ideas at the time). Perhaps a protest that advocated for a more radical position would be more likely to backfire. To address this question, perhaps we could look at lab experiments on protests, but that’s beyond the scope of this article.&lt;/p&gt;

&lt;h2 id=&quot;meta-concerns-with-this-meta-analysis&quot;&gt;Meta-concerns with this meta-analysis&lt;/h2&gt;

&lt;p&gt;I have some criticisms of my meta-analysis itself:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;I did not pre-register a methodology. I have limited experience conducting meta-analyses and I was learning as I wrote this article. Realistically, I would not have had the motivation to finish if I’d been required to fully determine a methodology in advance. But the Platonic ideal of this meta-analysis would have included a pre-registration.&lt;/li&gt;
  &lt;li&gt;Three of the studies (BLM, Civil Rights, and Earth Day) published their data. A thorough analysis would attempt to replicate those studies’ findings. I did not do that.&lt;/li&gt;
&lt;/ol&gt;

&lt;h1 id=&quot;are-social-change-labs-claims-justified&quot;&gt;Are Social Change Lab’s claims justified?&lt;/h1&gt;

&lt;h2 id=&quot;broad-claims&quot;&gt;Broad claims&lt;/h2&gt;

&lt;p&gt;Social Change Lab’s literature review included a summary of findings, reproduced below.&lt;/p&gt;

&lt;div id=&quot;table-6&quot; style=&quot;text-align:center;&quot;&gt;Table 6: Social Change Lab Findings&lt;/div&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Finding&lt;/th&gt;
      &lt;th&gt;Confidence&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Protest movements can have significant short-term impacts&lt;/td&gt;
      &lt;td&gt;Strong&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Protest movements can achieve intended outcomes in North America and Western Europe&lt;/td&gt;
      &lt;td&gt;Strong&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Protest movements can have significant impacts (2-5% shifts) on voting behaviour and electoral outcomes&lt;/td&gt;
      &lt;td&gt;Medium&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Protest movements can positively influence public opinion (≤10% shifts)&lt;/td&gt;
      &lt;td&gt;Medium&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Protest movements can influence public discourse (e.g. issue salience and media narratives)&lt;/td&gt;
      &lt;td&gt;Medium&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Protest movements can influence policy&lt;/td&gt;
      &lt;td&gt;Low (mixed evidence)&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Protest movements can influence policymaker beliefs&lt;/td&gt;
      &lt;td&gt;Low (little evidence)&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Protest movements can achieve desired outcomes in the Global South&lt;/td&gt;
      &lt;td&gt;Low (little evidence)&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Protest movements can have significant long-term impacts (on public opinion and public discourse)&lt;/td&gt;
      &lt;td&gt;Low (little evidence)&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;In this section, I assess whether the natural experiments support each “Strong” and “Medium” claim. I find that the evidence does indeed support the findings and I agree with Social Change Lab’s confidence levels in each case.&lt;/p&gt;

&lt;p&gt;I do not review the four “Low Confidence” claims because none of the natural experiments attempted to test them. (That fact itself suggests that “Low Confidence” is an accurate label.)&lt;/p&gt;

&lt;p&gt;Starting with the findings rated “Strong”:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;table&gt;
    &lt;tbody&gt;
      &lt;tr&gt;
        &lt;td&gt;Protest movements can have significant short-term impacts&lt;/td&gt;
        &lt;td&gt;&lt;strong&gt;Strong&lt;/strong&gt;&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
        &lt;td&gt;Protest movements can achieve intended outcomes in North America and Western Europe&lt;/td&gt;
        &lt;td&gt;&lt;strong&gt;Strong&lt;/strong&gt;&lt;/td&gt;
      &lt;/tr&gt;
    &lt;/tbody&gt;
  &lt;/table&gt;
&lt;/blockquote&gt;

&lt;p&gt;The natural experiments support these claims. There’s also supporting evidence from lab experiments on how protests affect people’s perceptions; studies on media coverage; and observational data on protest outcomes. For a meta-analysis of lab experiments, which I view as the second-strongest form of evidence, see &lt;a href=&quot;/materials/Protest Meta-Analysis.pdf&quot;&gt;Orazani et al. (2021)&lt;/a&gt;&lt;sup id=&quot;fnref:50:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:50&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;I do not have much confidence in most of these lines of evidence, but the natural experiments offer good support:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;All study results point in the same direction (as long as we exclude the data on violent protests).&lt;/li&gt;
  &lt;li&gt;The &lt;a href=&quot;#table-3&quot;&gt;pooled outcomes&lt;/a&gt; have high likelihood ratios / low p-values.&lt;/li&gt;
  &lt;li&gt;There are no signs of &lt;a href=&quot;#data-fabrication&quot;&gt;data fabrication&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I’m still concerned about &lt;a href=&quot;#publication-bias&quot;&gt;publication bias&lt;/a&gt; and &lt;a href=&quot;#spatial-autocorrelation&quot;&gt;spatial autocorrelation&lt;/a&gt;. I am not sure it is appropriate to describe the evidence as “Strong”; it would be fair to downgrade your confidence to “Medium” based on these concerns. But I also think “Strong” confidence is defensible; the distinction depends on how much weight you give to the hard-to-quantify limitations of the existing evidence.&lt;/p&gt;

&lt;p&gt;The natural experiments all cover nationwide, popular protests in the United States, so it’s not clear that the results &lt;a href=&quot;#will-the-results-generalize&quot;&gt;generalize&lt;/a&gt;. Regardless, Social Change Lab didn’t claim that protests &lt;em&gt;always&lt;/em&gt; have significant impacts, only that they “can” have impact; and the existence of these natural experiments shows that indeed they can.&lt;/p&gt;

&lt;p&gt;The highest-quality studies are all natural experiments, not true experiments. A true experiment would be preferable. But the rainfall method seems sufficient to establish causality, so I am comfortable treating these natural experiments’ methodologies as valid.&lt;/p&gt;

&lt;p&gt;Whether this evidence qualifies as “strong” is a matter of debate. Certainly the evidence could be much stronger. But I would be surprised if these findings were overturned, so I think Social Change Lab’s confidence level is fair.&lt;/p&gt;

&lt;blockquote&gt;
  &lt;table&gt;
    &lt;tbody&gt;
      &lt;tr&gt;
        &lt;td&gt;Protest movements can have significant impacts (2-5% shifts) on voting behaviour and electoral outcomes&lt;/td&gt;
        &lt;td&gt;&lt;strong&gt;Medium&lt;/strong&gt;&lt;/td&gt;
      &lt;/tr&gt;
    &lt;/tbody&gt;
  &lt;/table&gt;
&lt;/blockquote&gt;

&lt;p&gt;A 2–5% shift is consistent with the natural experiments, which found changes in vote share ranging from 1.55% to 5.54% (see &lt;a href=&quot;#table-4&quot;&gt;Table 4&lt;/a&gt;). I think 2–5% is fair as an optimistic expectation, given that the natural experiments all covered large nationwide protests.&lt;/p&gt;

&lt;p&gt;I believe the rainfall method is effective at establishing causality, but we can’t be too confident in the magnitude of the effect because rainfall does not perfectly predict protest attendance. So I would not rate the confidence for this finding as higher than “Medium”.&lt;/p&gt;

&lt;blockquote&gt;
  &lt;table&gt;
    &lt;tbody&gt;
      &lt;tr&gt;
        &lt;td&gt;Protest movements can positively influence public opinion (≤10% shifts)&lt;/td&gt;
        &lt;td&gt;&lt;strong&gt;Medium&lt;/strong&gt;&lt;/td&gt;
      &lt;/tr&gt;
    &lt;/tbody&gt;
  &lt;/table&gt;
&lt;/blockquote&gt;

&lt;p&gt;Among the natural experiments, only two (Madestam et al. 2013; Hungerman &amp;amp; Moorthy 2023) reported on public opinion in terms of percentages. Public opinion changes clustered around 5% for the multiple measures in the two studies.&lt;/p&gt;

&lt;p&gt;Klein Teeselink &amp;amp; Melios (2021) reported changes in public opinion on a 5-point scale. Rainfall predicted changes of 0.242 and 0.339 on two different questions, which correspond to percentage changes of about 6% and 8.5%, although the interpretation of these percentages isn’t the same as for the other two studies.&lt;/p&gt;
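&lt;p&gt;As a quick sanity check, those percentages follow from dividing each shift by the scale’s full range (5 - 1 = 4 points). The snippet below is purely illustrative, and assumes that is the intended conversion:&lt;/p&gt;

```python
# Convert a shift on a 5-point scale to a percentage of the scale's
# full range (5 - 1 = 4 points). This interpretation is an assumption,
# not taken from the study itself.
def scale_shift_to_percent(shift, scale_points=5):
    return shift / (scale_points - 1) * 100

print(scale_shift_to_percent(0.242))  # about 6%
print(scale_shift_to_percent(0.339))  # about 8.5%
```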

&lt;p&gt;I believe the data on voter behavior also provides evidence on public opinion—if you vote differently, it’s most likely because your opinion changed.&lt;/p&gt;

&lt;p&gt;So I think Social Change Lab’s finding is indeed moderately well supported.&lt;/p&gt;

&lt;blockquote&gt;
  &lt;table&gt;
    &lt;tbody&gt;
      &lt;tr&gt;
        &lt;td&gt;Protest movements can influence public discourse (e.g. issue salience and media narratives)&lt;/td&gt;
        &lt;td&gt;&lt;strong&gt;Medium&lt;/strong&gt;&lt;/td&gt;
      &lt;/tr&gt;
    &lt;/tbody&gt;
  &lt;/table&gt;
&lt;/blockquote&gt;

&lt;p&gt;None of the natural experiments directly addressed this claim.&lt;sup id=&quot;fnref:35&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:35&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;33&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;Several observational studies found that protests frequently get media coverage. Even though the studies are all observational, I am comfortable inferring causality in this case—it seems odd to say that protests occurred, the news covered the protests, but the protests did not cause the news coverage.&lt;/p&gt;

&lt;h2 id=&quot;claims-about-individual-studies&quot;&gt;Claims about individual studies&lt;/h2&gt;

&lt;p&gt;The literature review discussed five studies on real-world impacts of protests. Did it represent the studies accurately?&lt;/p&gt;

&lt;p&gt;Social Change Lab discussed the observational component of &lt;strong&gt;Wasow (2020)&lt;/strong&gt;&lt;sup id=&quot;fnref:9:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:9&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;11&lt;/a&gt;&lt;/sup&gt; but not the quasi-experimental component.&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;By looking at US counties that are similar on a number of dimensions (black population, foreign-born population, whether the county is urban/rural, etc.), Wasow is able to mimic an experiment by testing how the Democratic vote share changes in counties with protests and matching counties without protests.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I don’t think it’s reasonable to say that a matched observational design “mimic[s] an experiment”. It could be that protests were more likely to happen in counties that were &lt;em&gt;already shifting Democratic&lt;/em&gt;; you can’t prove that the protests caused the shift.&lt;/p&gt;

&lt;p&gt;I agree with everything Social Change Lab wrote about &lt;strong&gt;Madestam (2013)&lt;/strong&gt;&lt;sup id=&quot;fnref:4:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:4&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;10&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;Regarding &lt;strong&gt;Klein Teeselink &amp;amp; Melios (2021)&lt;/strong&gt;&lt;sup id=&quot;fnref:6:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:6&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;12&lt;/a&gt;&lt;/sup&gt;, the literature review wrote:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;[A] one percentage point increase in the fraction of the population going out to protest increased the Democratic vote share in that county by 5.6 percentage points[.]&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;First, this figure is incorrect: it should be 3.3 percentage points (page 11). Klein Teeselink &amp;amp; Melios (2021) was revised in 2025 and I only have access to the latest revision, so it’s possible that Social Change Lab’s figure comes from the 2021 version.&lt;/p&gt;

&lt;p&gt;Second, Klein Teeselink &amp;amp; Melios’ &lt;a href=&quot;#failed-placebo-tests&quot;&gt;placebo tests&lt;/a&gt; show that the natural experiment failed to establish causality. Social Change Lab interprets the study’s outcome as causal, but I do not believe this interpretation is justified.&lt;/p&gt;

&lt;p&gt;Social Change Lab’s description of &lt;strong&gt;McVeigh, Cunningham &amp;amp; Farrell (2014)&lt;/strong&gt;&lt;sup id=&quot;fnref:2:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;5&lt;/a&gt;&lt;/sup&gt; was fair; it was appropriately cautious about the weakness of the paper’s evidence.&lt;/p&gt;

&lt;p&gt;On &lt;strong&gt;Bremer et al. (2019)&lt;/strong&gt;&lt;sup id=&quot;fnref:3:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;6&lt;/a&gt;&lt;/sup&gt;:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;They found that whilst no such relationship existed for all 30 countries, in Western Europe did [sic] find a statistically significant interaction between protest, levels of economic hardship in a country and the loss of votes for the incumbent party.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I’m suspicious of p-hacking when a study finds a non-significant main result and a significant sub-group result. I wish Social Change Lab had been more skeptical of Bremer et al.’s approach.&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;It seems that for a given level of economic hardship a country faces, if the number of protests increase, the incumbent political party will lose more votes[.]&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This quote implies causality, which was not established—the Bremer et al. study was purely observational.&lt;/p&gt;

&lt;p&gt;In summary, Social Change Lab overstated the strength of evidence several times when reviewing particular studies. However, I believe their summary findings are still accurate, partially thanks to the two additional natural experiments (&lt;a href=&quot;https://mlarreboure.com/womenmarch.pdf&quot;&gt;Larreboure &amp;amp; González (2021)&lt;/a&gt;&lt;sup id=&quot;fnref:14:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:14&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;13&lt;/a&gt;&lt;/sup&gt; and &lt;a href=&quot;https://www.aeaweb.org/content/file?id=16104&quot;&gt;Hungerman &amp;amp; Moorthy (2023)&lt;/a&gt;&lt;sup id=&quot;fnref:36:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:36&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;7&lt;/a&gt;&lt;/sup&gt;) that came out more recently.&lt;/p&gt;

&lt;h1 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h1&gt;

&lt;p&gt;My position on the Social Change Lab literature review:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;The review was insufficiently skeptical about weak evidence, and too willing to attribute causality where it had not been established.&lt;/li&gt;
  &lt;li&gt;The review’s summary claims about the overall strength of evidence were consistent with my assessments. Perhaps the “Strong Confidence” findings were overconfident and should be “Medium Confidence” instead, but I can see arguments either way.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Conducting a meta-analysis changed my view on protest effectiveness. My previous stance was that protests probably work, and that various lines of evidence pointed that way, but that all available evidence was weak. I now believe that some of the evidence is relatively&lt;sup id=&quot;fnref:42&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:42&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;34&lt;/a&gt;&lt;/sup&gt; strong, and I am more confident that protests work.&lt;/p&gt;

&lt;h1 id=&quot;source-code&quot;&gt;Source code&lt;/h1&gt;

&lt;p&gt;Source code for my meta-analysis is available &lt;a href=&quot;https://github.com/michaeldickens/public-scripts/blob/master/protest_outcomes.py&quot;&gt;on GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;h1 id=&quot;appendix-a-additional-tables&quot;&gt;Appendix A: Additional tables&lt;/h1&gt;

&lt;p&gt;Most meta-analyses report &lt;a href=&quot;https://en.wikipedia.org/wiki/Study_heterogeneity&quot;&gt;study heterogeneity&lt;/a&gt; (I&lt;sup&gt;2&lt;/sup&gt;). I reported &lt;code&gt;P(negative effect)&lt;/code&gt; instead, which provides equivalent information, and I believe it’s more useful in this case. For completeness, Table A.1 gives the I&lt;sup&gt;2&lt;/sup&gt; values for &lt;a href=&quot;#table-3&quot;&gt;Table 3&lt;/a&gt;.&lt;/p&gt;

&lt;div id=&quot;table-a.1&quot; style=&quot;text-align:center;&quot;&gt;Table A.1: Pooled Outcomes with I&lt;sup&gt;2&lt;/sup&gt;&lt;/div&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Outcomes&lt;/th&gt;
      &lt;th&gt;Mean&lt;/th&gt;
      &lt;th&gt;Std Err&lt;/th&gt;
      &lt;th&gt;likelihood ratio&lt;/th&gt;
      &lt;th&gt;p-value&lt;/th&gt;
      &lt;th&gt;I&lt;sup&gt;2&lt;/sup&gt;&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Vote Share Per Protester&lt;/td&gt;
      &lt;td&gt;11.95&lt;/td&gt;
      &lt;td&gt;4.00&lt;/td&gt;
      &lt;td&gt;87.1&lt;/td&gt;
      &lt;td&gt;0.003&lt;/td&gt;
      &lt;td&gt;3%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Vote Share&lt;/td&gt;
      &lt;td&gt;1.59&lt;/td&gt;
      &lt;td&gt;0.48&lt;/td&gt;
      &lt;td&gt;257&lt;/td&gt;
      &lt;td&gt;0.001&lt;/td&gt;
      &lt;td&gt;45%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Vote Share (Rain Only)&lt;/td&gt;
      &lt;td&gt;1.14&lt;/td&gt;
      &lt;td&gt;0.42&lt;/td&gt;
      &lt;td&gt;39.3&lt;/td&gt;
      &lt;td&gt;0.007&lt;/td&gt;
      &lt;td&gt;0%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Single Hypothesis&lt;/td&gt;
      &lt;td&gt;1.06&lt;/td&gt;
      &lt;td&gt;0.78&lt;/td&gt;
      &lt;td&gt;2.55&lt;/td&gt;
      &lt;td&gt;0.172&lt;/td&gt;
      &lt;td&gt;74%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Favorability&lt;/td&gt;
      &lt;td&gt;2.68&lt;/td&gt;
      &lt;td&gt;2.32&lt;/td&gt;
      &lt;td&gt;1.95&lt;/td&gt;
      &lt;td&gt;0.249&lt;/td&gt;
      &lt;td&gt;72%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;
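&lt;p&gt;For anyone wanting to reproduce the I&lt;sup&gt;2&lt;/sup&gt; column, the statistic can be computed from effect sizes and standard errors via the heterogeneity statistic Q. Here is a minimal fixed-effect sketch; the inputs are made up, not the studies’ actual outcomes:&lt;/p&gt;

```python
# Higgins' I^2: the share of between-study variance attributable to
# heterogeneity rather than sampling error. Inputs are hypothetical.
def i_squared(effects, std_errs):
    k = len(effects)
    weights = [1 / se ** 2 for se in std_errs]
    pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
    q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, effects))
    if q == 0:
        return 0.0
    return max(0.0, (q - (k - 1)) / q)

i_squared([1.2, 1.5, 1.1], [0.4, 0.5, 0.45])  # near-homogeneous, so 0
```

&lt;p&gt;Multiplying the result by 100 gives percentages like those in the table.&lt;/p&gt;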

&lt;p&gt;Table A.2 reports summary statistics for the same pooled outcomes as &lt;a href=&quot;#table-3&quot;&gt;Table 3&lt;/a&gt;, plus additional outcomes from &lt;a href=&quot;https://dx.doi.org/10.2139/ssrn.3809877&quot;&gt;Klein Teeselink &amp;amp; Melios (2021)&lt;/a&gt;&lt;sup id=&quot;fnref:6:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:6&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;12&lt;/a&gt;&lt;/sup&gt; that I excluded from the main table. Think of it as the reverse of a &lt;a href=&quot;https://en.wikipedia.org/wiki/Cross-validation_(statistics)#Leave-one-out_cross-validation&quot;&gt;leave-one-out analysis&lt;/a&gt;: a put-one-in analysis.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;“Vote Share Per Protester” adds BLM vote share per protester (mean 3.3, std err 0.6, n = 3053; from Table 2).&lt;/li&gt;
  &lt;li&gt;The middle three rows add BLM vote share.&lt;/li&gt;
  &lt;li&gt;“Favorability” adds survey agreement rate for the statement “Blacks should not receive special favors.”&lt;/li&gt;
&lt;/ul&gt;

&lt;div id=&quot;table-a.2&quot; style=&quot;text-align:center;&quot;&gt;Table A.2: Pooled Outcomes Including BLM&lt;/div&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Outcomes&lt;/th&gt;
      &lt;th&gt;Mean&lt;/th&gt;
      &lt;th&gt;Std Err&lt;/th&gt;
      &lt;th&gt;likelihood ratio&lt;/th&gt;
      &lt;th&gt;p-value&lt;/th&gt;
      &lt;th&gt;I&lt;sup&gt;2&lt;/sup&gt;&lt;/th&gt;
      &lt;th&gt;P(negative effect)&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Vote Share Per Protester&lt;/td&gt;
      &lt;td&gt;7.89&lt;/td&gt;
      &lt;td&gt;3.91&lt;/td&gt;
      &lt;td&gt;7.62&lt;/td&gt;
      &lt;td&gt;0.044&lt;/td&gt;
      &lt;td&gt;65%&lt;/td&gt;
      &lt;td&gt;0.072&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Vote Share&lt;/td&gt;
      &lt;td&gt;1.71&lt;/td&gt;
      &lt;td&gt;0.43&lt;/td&gt;
      &lt;td&gt;2.84e+03&lt;/td&gt;
      &lt;td&gt;0.001&lt;/td&gt;
      &lt;td&gt;33%&lt;/td&gt;
      &lt;td&gt;0&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Vote Share (Rain Only)&lt;/td&gt;
      &lt;td&gt;1.32&lt;/td&gt;
      &lt;td&gt;0.41&lt;/td&gt;
      &lt;td&gt;192&lt;/td&gt;
      &lt;td&gt;0.002&lt;/td&gt;
      &lt;td&gt;3%&lt;/td&gt;
      &lt;td&gt;0&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Single Hypothesis&lt;/td&gt;
      &lt;td&gt;1.36&lt;/td&gt;
      &lt;td&gt;0.68&lt;/td&gt;
      &lt;td&gt;7.6&lt;/td&gt;
      &lt;td&gt;0.045&lt;/td&gt;
      &lt;td&gt;69%&lt;/td&gt;
      &lt;td&gt;0.123&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Favorability&lt;/td&gt;
      &lt;td&gt;2.66&lt;/td&gt;
      &lt;td&gt;1.99&lt;/td&gt;
      &lt;td&gt;2.44&lt;/td&gt;
      &lt;td&gt;0.182&lt;/td&gt;
      &lt;td&gt;48%&lt;/td&gt;
      &lt;td&gt;0.136&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;
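&lt;p&gt;The mechanics of “putting one in” can be illustrated with simple fixed-effect inverse-variance pooling (a simplification of the method my meta-analysis actually uses). The two baseline outcomes below are made up; the added study uses the BLM mean of 3.3 and standard error of 0.6 quoted above:&lt;/p&gt;

```python
import math

# Fixed-effect inverse-variance pooling: weight each outcome by 1/SE^2.
# A simplified sketch, not the article's exact pooling method.
def pool(means, std_errs):
    weights = [1 / se ** 2 for se in std_errs]
    mean = sum(w * m for w, m in zip(weights, means)) / sum(weights)
    return mean, math.sqrt(1 / sum(weights))

base_means, base_ses = [12.7, 11.3], [5.5, 6.0]  # hypothetical outcomes
blm_mean, blm_se = 3.3, 0.6  # BLM vote share per protester

without_blm = pool(base_means, base_ses)
with_blm = pool(base_means + [blm_mean], base_ses + [blm_se])
```

&lt;p&gt;Because the BLM outcome has a much smaller standard error, it dominates the pooled estimate, which is why adding it pulls the means down without consistently weakening the evidence.&lt;/p&gt;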

&lt;p&gt;Table A.3 uses the same row definitions as in Table A.2, while also correcting for publication bias by creating dummy null outcomes as described &lt;a href=&quot;#publication-bias&quot;&gt;above&lt;/a&gt;.&lt;/p&gt;

&lt;div id=&quot;table-a.3&quot; style=&quot;text-align:center;&quot;&gt;Table A.3: Pooled Outcomes Including BLM, Adjusted for Publication Bias&lt;/div&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Outcomes&lt;/th&gt;
      &lt;th&gt;Mean&lt;/th&gt;
      &lt;th&gt;Std Err&lt;/th&gt;
      &lt;th&gt;likelihood ratio&lt;/th&gt;
      &lt;th&gt;p-value&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Vote Share Per Protester&lt;/td&gt;
      &lt;td&gt;2.79&lt;/td&gt;
      &lt;td&gt;1.55&lt;/td&gt;
      &lt;td&gt;5&lt;/td&gt;
      &lt;td&gt;0.073&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Vote Share&lt;/td&gt;
      &lt;td&gt;0.88&lt;/td&gt;
      &lt;td&gt;0.38&lt;/td&gt;
      &lt;td&gt;14.9&lt;/td&gt;
      &lt;td&gt;0.021&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Vote Share (Rain Only)&lt;/td&gt;
      &lt;td&gt;0.70&lt;/td&gt;
      &lt;td&gt;0.36&lt;/td&gt;
      &lt;td&gt;6.69&lt;/td&gt;
      &lt;td&gt;0.052&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Favorability&lt;/td&gt;
      &lt;td&gt;0.66&lt;/td&gt;
      &lt;td&gt;0.51&lt;/td&gt;
      &lt;td&gt;2.26&lt;/td&gt;
      &lt;td&gt;0.203&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;The BLM study found a smaller mean effect than the other studies, but it also had a high t-stat. Adding BLM to the pooled outcomes decreases means but does not consistently decrease the strength of evidence.&lt;/p&gt;

&lt;p&gt;As discussed &lt;a href=&quot;#teeselink--melios-2021-on-2020-black-lives-matter-protests&quot;&gt;previously&lt;/a&gt;, the BLM study isolates local effects of protests, which is undesirable—it ignores any non-local effects that protests might have. Luckily, the study also reports results with no adjustment for spatial autocorrelation (in its Table A3).&lt;/p&gt;

&lt;p&gt;In general, it’s not a good idea to ignore spatial autocorrelation, because doing so may overstate the strength of evidence. But for the “vote share per protester” metric, the un-adjusted outcome had a &lt;em&gt;lower&lt;/em&gt; t-stat than the adjusted outcome. So I think it’s fair to add the un-adjusted result to the pooled sample.&lt;/p&gt;

&lt;p&gt;Here are the results for pooled vote share per protester, using the BLM outcome with no adjustment for spatial autocorrelation.&lt;/p&gt;

&lt;div id=&quot;table-a.4&quot; style=&quot;text-align:center;&quot;&gt;Table A.4: Pooled Outcomes for Vote Share Per Protester, Including BLM&lt;/div&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Outcomes&lt;/th&gt;
      &lt;th&gt;Mean&lt;/th&gt;
      &lt;th&gt;Std Err&lt;/th&gt;
      &lt;th&gt;likelihood ratio&lt;/th&gt;
      &lt;th&gt;p-value&lt;/th&gt;
      &lt;th&gt;P(negative effect)&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;no correction&lt;/td&gt;
      &lt;td&gt;11.89&lt;/td&gt;
      &lt;td&gt;2.32&lt;/td&gt;
      &lt;td&gt;4.83e5&lt;/td&gt;
      &lt;td&gt;4e-7&lt;/td&gt;
      &lt;td&gt;0&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;corrected for publication bias&lt;/td&gt;
      &lt;td&gt;6.23&lt;/td&gt;
      &lt;td&gt;3.05&lt;/td&gt;
      &lt;td&gt;8.1&lt;/td&gt;
      &lt;td&gt;0.041&lt;/td&gt;
      &lt;td&gt;0.138&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;My meta-analysis compared standardized outcomes (using the method described in &lt;a href=&quot;/materials/gelman2008.pdf&quot;&gt;Gelman (2007)&lt;/a&gt;&lt;sup id=&quot;fnref:58:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:58&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;24&lt;/a&gt;&lt;/sup&gt;). Table A.5 shows the results from pooling unstandardized outcomes instead (excluding BLM).&lt;/p&gt;

&lt;div id=&quot;table-a.5&quot; style=&quot;text-align:center;&quot;&gt;Table A.5: Unstandardized Pooled Outcomes&lt;/div&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Outcomes&lt;/th&gt;
      &lt;th&gt;Mean&lt;/th&gt;
      &lt;th&gt;Std Err&lt;/th&gt;
      &lt;th&gt;likelihood ratio&lt;/th&gt;
      &lt;th&gt;p-value&lt;/th&gt;
      &lt;th&gt;P(negative effect)&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Vote Share Per Protester&lt;/td&gt;
      &lt;td&gt;11.95&lt;/td&gt;
      &lt;td&gt;4.00&lt;/td&gt;
      &lt;td&gt;87.1&lt;/td&gt;
      &lt;td&gt;0.003&lt;/td&gt;
      &lt;td&gt;0&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Vote Share&lt;/td&gt;
      &lt;td&gt;3.31&lt;/td&gt;
      &lt;td&gt;1.37&lt;/td&gt;
      &lt;td&gt;18.1&lt;/td&gt;
      &lt;td&gt;0.017&lt;/td&gt;
      &lt;td&gt;0.04&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Vote Share (Rain Only)&lt;/td&gt;
      &lt;td&gt;1.94&lt;/td&gt;
      &lt;td&gt;1.02&lt;/td&gt;
      &lt;td&gt;6.13&lt;/td&gt;
      &lt;td&gt;0.057&lt;/td&gt;
      &lt;td&gt;0.011&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Single Hypothesis&lt;/td&gt;
      &lt;td&gt;1.66&lt;/td&gt;
      &lt;td&gt;1.77&lt;/td&gt;
      &lt;td&gt;1.55&lt;/td&gt;
      &lt;td&gt;0.349&lt;/td&gt;
      &lt;td&gt;0.293&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Favorability&lt;/td&gt;
      &lt;td&gt;5.20&lt;/td&gt;
      &lt;td&gt;1.84&lt;/td&gt;
      &lt;td&gt;53.8&lt;/td&gt;
      &lt;td&gt;0.005&lt;/td&gt;
      &lt;td&gt;0&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;
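&lt;p&gt;For reference, Gelman’s standardization rescales a variable by two standard deviations, which makes coefficients on continuous inputs roughly comparable to coefficients on binary ones. Below is a minimal sketch of the rescaling itself, with made-up data; mapping it onto each study’s reported outcomes takes more bookkeeping than shown here:&lt;/p&gt;

```python
import statistics

# Gelman-style scaling: center, then divide by two standard deviations,
# so a one-unit change spans roughly a "low vs. high" contrast,
# comparable to flipping a binary variable. Data are hypothetical.
def gelman_scale(xs):
    mean = statistics.fmean(xs)
    sd = statistics.pstdev(xs)
    return [(x - mean) / (2 * sd) for x in xs]

scaled = gelman_scale([1.0, 2.0, 3.0, 4.0])
```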

&lt;h1 id=&quot;appendix-b-methodological-revisions&quot;&gt;Appendix B: Methodological revisions&lt;/h1&gt;

&lt;p&gt;In the interest of transparency—and because I didn’t pre-register a methodology—here is a list of non-trivial revisions I made in the process of writing this article. In chronological order:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Originally, I used a different method for estimating the effect of publication bias. Instead of creating dummy null clones as described &lt;a href=&quot;#publication-bias&quot;&gt;above&lt;/a&gt;, I created &lt;code&gt;k&lt;/code&gt; identical null dummies (one for each real outcome), each taking the average of the real studies’ standard errors as its standard error. This method produced lower p-values. However, I decided it was too unrealistic to give all three null dummies the exact same summary statistics.&lt;/li&gt;
  &lt;li&gt;For BLM and Women’s March results, I originally used the primary outcomes as reported by their respective papers. I revised my meta-analysis to use the weakest outcomes (lowest t-stat) from the robustness checks, to more conservatively account for spatial autocorrelation.&lt;/li&gt;
  &lt;li&gt;I originally included BLM outcomes in the meta-analysis. Upon re-reading the BLM paper, I realized it failed its placebo tests, so I removed it. &lt;a href=&quot;#appendix-a-additional-tables&quot;&gt;Appendix A&lt;/a&gt; shows the results when including BLM.&lt;/li&gt;
  &lt;li&gt;I went back to using the main outcome for BLM instead of a robustness check outcome because it’s simpler, and I wasn’t including BLM in the main meta-analysis anyway. This slightly strengthened the reported results in &lt;a href=&quot;#appendix-a-additional-tables&quot;&gt;Appendix A&lt;/a&gt;.&lt;/li&gt;
  &lt;li&gt;Originally, my meta-analysis used unstandardized means. I wanted to use standardized means but I wasn’t sure how to standardize them. Eventually, I found &lt;a href=&quot;/materials/gelman2008.pdf&quot;&gt;Gelman (2007)&lt;/a&gt;&lt;sup id=&quot;fnref:58:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:58&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;24&lt;/a&gt;&lt;/sup&gt; and used its method for standardizing outcomes. This increased most results’ t-stats because it decreased between-study variance. (However, it decreased the Favorability t-stat.) Unstandardized pooled outcomes are reported in &lt;a href=&quot;#table-a.5&quot;&gt;Table A.5&lt;/a&gt;.
    &lt;ul&gt;
      &lt;li&gt;Initially, I wanted to scale all binary outcomes by their standard deviations, but I did not have the necessary data to calculate all standard deviations, so I simply left them unscaled.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
&lt;/ol&gt;
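&lt;p&gt;For concreteness, the dummy-null-clone method from revision 1 can be sketched like this (with made-up outcomes and simple inverse-variance pooling, not my actual data or pooling code): each real outcome gets a clone with a mean of zero and the same standard error, and the enlarged set is re-pooled.&lt;/p&gt;

```python
import math

# Publication-bias adjustment via dummy null clones: pair each real
# outcome with a clone that has mean 0 and the same standard error,
# then re-pool with inverse-variance weights. Outcomes are hypothetical.
def pool_with_null_clones(means, std_errs):
    all_means = list(means) + [0.0] * len(means)
    all_ses = list(std_errs) * 2
    weights = [1 / se ** 2 for se in all_ses]
    mean = sum(w * m for w, m in zip(weights, all_means)) / sum(weights)
    return mean, math.sqrt(1 / sum(weights))
```

&lt;p&gt;This construction exactly halves the pooled mean (the clones double the total weight while adding nothing to the numerator): one hypothetical unpublished null for every published positive result.&lt;/p&gt;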

&lt;h1 id=&quot;appendix-c-comparing-the-strength-of-evidence-to-saturated-fat-research&quot;&gt;Appendix C: Comparing the strength of evidence to saturated fat research&lt;/h1&gt;

&lt;p&gt;To get some perspective on the strength of evidence on protests, I would like to compare it to a thorny question in an unrelated field that I reviewed recently.&lt;/p&gt;

&lt;p&gt;Last year I &lt;a href=&quot;https://mdickens.me/2024/09/26/outlive_a_critical_review/#the-data-are-unclear-on-whether-reducing-saturated-fat-intake-is-beneficial&quot;&gt;examined the evidence&lt;/a&gt; on whether saturated fat is unhealthy, primarily focusing on a &lt;a href=&quot;https://doi.org/10.1002/14651858.cd011737.pub3&quot;&gt;Cochrane review&lt;/a&gt;&lt;sup id=&quot;fnref:26&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:26&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;35&lt;/a&gt;&lt;/sup&gt; of &lt;a href=&quot;https://en.wikipedia.org/wiki/Randomized_controlled_trial&quot;&gt;RCTs&lt;/a&gt;. I ultimately decided I was 85% confident that saturated fat is unhealthy.&lt;/p&gt;

&lt;p&gt;How does the evidence on protest effectiveness compare to the evidence on saturated fat?&lt;/p&gt;

&lt;p&gt;Both hypotheses face similar problems: there are many observational studies that support the hypothesis, but few experiments. (In the case of protests, there are &lt;em&gt;no&lt;/em&gt; (real-world) experiments, but there are some natural experiments.)&lt;/p&gt;

&lt;p&gt;The Cochrane review included 15 RCTs. My review included five natural experiments.&lt;/p&gt;

&lt;p&gt;The studies in the Cochrane review were true experiments. The protest studies were not true experiments, which means they can’t establish causality quite as firmly.&lt;/p&gt;

&lt;p&gt;The Cochrane review found no evidence of publication bias (&lt;a href=&quot;https://www.cochranelibrary.com/cdsr/doi/10.1002/14651858.CD011737.pub3/media/CDSR/CD011737/image_n/nCD011737-FIG-03.svg&quot;&gt;Figure 3&lt;/a&gt;). There aren’t enough protest quasi-experiments to test for publication bias, but I did &lt;a href=&quot;#publication-bias&quot;&gt;test&lt;/a&gt; what would happen if I added in some hypothetical null-result studies.&lt;/p&gt;

&lt;p&gt;Three individual studies on saturated fat had statistically significant positive effects, six had non-significant positive effects, and four had non-significant negative effects. (Two studies did not report data on cardiovascular events.)&lt;/p&gt;

&lt;p&gt;In my review, three out of three studies (or four out of four&lt;sup id=&quot;fnref:55&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:55&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;36&lt;/a&gt;&lt;/sup&gt;) found statistically significant positive effects for &lt;em&gt;nonviolent&lt;/em&gt; protests, and one study found statistically significant negative effects for &lt;em&gt;violent&lt;/em&gt; protests.&lt;/p&gt;

&lt;p&gt;When pooling all RCTs together, the Cochrane review found a marginally statistically significant effect of saturated fat reduction on cardiovascular events (95% CI [0.70, 0.98] where 1 = no effect). It also found significant effects on short-term health outcomes like weight and cholesterol. It found positive but non-significant results on mortality outcomes (including all-cause mortality and cardiovascular mortality).&lt;/p&gt;
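&lt;p&gt;For intuition, a p-value can be backed out of a 95% CI on a ratio by assuming approximate normality on the log scale. This is a rough reconstruction, not the review’s own calculation:&lt;/p&gt;

```python
import math

def p_from_ratio_ci(lo, hi, null=1.0):
    """Approximate two-sided p-value from a 95% CI on a ratio,
    assuming the log of the ratio is normally distributed."""
    log_lo, log_hi = math.log(lo), math.log(hi)
    est = (log_lo + log_hi) / 2          # midpoint as the point estimate
    se = (log_hi - log_lo) / (2 * 1.96)  # a 95% CI spans 2 * 1.96 * SE
    z = abs(est - math.log(null)) / se
    # two-sided p-value under a standard normal
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

# The review's CI of [0.70, 0.98] corresponds to roughly p = 0.03
p = p_from_ratio_ci(0.70, 0.98)
```

&lt;p&gt;(Taking the log-scale midpoint as the point estimate is an assumption; the review reports its own pooled estimate.)&lt;/p&gt;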

&lt;p&gt;In my meta-analysis, most of my &lt;a href=&quot;#table-3&quot;&gt;pooled samples&lt;/a&gt; had strong positive results. My primary metric had p &amp;lt; 0.003; the Cochrane review didn’t find any primary results that strong.&lt;/p&gt;

&lt;p&gt;Even though the Cochrane review included three times as many studies, I think the evidence on protest outcomes is stronger:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;The saturated fat RCTs got mixed results, but the three or four studies on nonviolent protests all pointed in the same direction.&lt;/li&gt;
  &lt;li&gt;(Most of) the pooled outcomes for protests had moderate to strong p-values. The pooled outcomes for saturated fat reduction had moderate p-values at best.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;On the other hand, the protest studies have to deal with the &lt;a href=&quot;#spatial-autocorrelation&quot;&gt;spatial autocorrelation&lt;/a&gt; problem, and it’s not entirely clear that they succeeded at establishing causation. The saturated fat studies were experiments; they had no analogous problem.&lt;/p&gt;

&lt;p&gt;The smaller number of studies also means the protests meta-analysis is more vulnerable to errors in any one study.&lt;/p&gt;

&lt;p&gt;It’s a judgment call as to whether you think the weaker methodology for protest studies outweighs the stronger likelihood ratios. I’m inclined to say it doesn’t.&lt;/p&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Ozden, J., &amp;amp; Glover, S. (2022). &lt;a href=&quot;https://www.socialchangelab.org/_files/ugd/503ba4_94d84534d5b348468739b0d6a36b3940.pdf&quot;&gt;Literature Review: Protest Outcomes.&lt;/a&gt; &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:10&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Ozden, J., &amp;amp; Glover, S. (2022). &lt;a href=&quot;https://www.socialchangelab.org/_files/ugd/503ba4_052959e2ee8d4924934b7efe3916981e.pdf&quot;&gt;Protest movements: How effective are they?&lt;/a&gt; &lt;a href=&quot;#fnref:10&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:50&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Orazani, N., Tabri, N., Wohl, M. J. A., &amp;amp; Leidner, B. (2021). &lt;a href=&quot;https://doi.org/10.1002/ejsp.2722&quot;&gt;Social movement strategy (nonviolent vs. violent) and the garnering of third-party support: A meta-analysis.&lt;/a&gt; &lt;a href=&quot;#fnref:50&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:50:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt; &lt;a href=&quot;#fnref:50:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;3&lt;/sup&gt;&lt;/a&gt; &lt;a href=&quot;#fnref:50:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;4&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:13&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;ChatGPT Deep Research was useful for finding and summarizing studies, but not for assessing their quality. When I asked ChatGPT to only include methodologically rigorous studies in the review, it didn’t appear to change which studies it included, it just rationalized why every study was rigorous. It said things like (paraphrasing) “we know this observational study’s findings are robust because it &lt;a href=&quot;https://dynomight.net/control/&quot;&gt;controlled for confounders&lt;/a&gt;” and “because it had a large sample size” (??). &lt;a href=&quot;#fnref:13&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:2&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;McVeigh, R., Cunningham, D., &amp;amp; Farrell, J. (2014). &lt;a href=&quot;https://doi.org/10.1177/0003122414555885&quot;&gt;Political Polarization as a Social Movement Outcome.&lt;/a&gt; &lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:2:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt; &lt;a href=&quot;#fnref:2:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;3&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:3&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Bremer, B., Hutter, S., &amp;amp; Kriesi, H. (2020). &lt;a href=&quot;https://doi.org/10.1111/1475-6765.12375&quot;&gt;Dynamics of protest and electoral politics in the Great Recession.&lt;/a&gt; &lt;a href=&quot;#fnref:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:3:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt; &lt;a href=&quot;#fnref:3:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;3&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:36&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Hungerman, D., &amp;amp; Moorthy, V. (2023). &lt;a href=&quot;https://www.aeaweb.org/content/file?id=16104&quot;&gt;Every Day Is Earth Day: Evidence on the Long-Term Impact of Environmental Activism.&lt;/a&gt; doi: &lt;a href=&quot;https://doi.org/10.1257/app.20210045&quot;&gt;10.1257/app.20210045&lt;/a&gt; &lt;a href=&quot;#fnref:36&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:36:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt; &lt;a href=&quot;#fnref:36:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;3&lt;/sup&gt;&lt;/a&gt; &lt;a href=&quot;#fnref:36:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;4&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:46&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Mellon, J. (2024). &lt;a href=&quot;https://doi.org/10.1111/ajps.12894&quot;&gt;Rain, rain, go away: 194 potential exclusion-restriction violations for studies using weather as an instrumental variable.&lt;/a&gt; &lt;a href=&quot;#fnref:46&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:47&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Sarsons, H. (2015). &lt;a href=&quot;https://doi.org/10.1016/j.jdeveco.2014.12.007&quot;&gt;Rainfall and conflict: A cautionary tale.&lt;/a&gt; &lt;a href=&quot;#fnref:47&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:4&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Madestam, A., Shoag, D., Veuger, S., &amp;amp; Yanagizawa-Drott, D. (2013). &lt;a href=&quot;https://doi.org/10.1093/qje/qjt021&quot;&gt;Do Political Protests Matter? Evidence from the Tea Party Movement.&lt;/a&gt; &lt;a href=&quot;#fnref:4&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:4:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt; &lt;a href=&quot;#fnref:4:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;3&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:9&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Wasow, O. (2020). &lt;a href=&quot;https://doi.org/10.1017/S000305542000009X&quot;&gt;Agenda Seeding: How 1960s Black Protests Moved Elites, Public Opinion and Voting.&lt;/a&gt; &lt;a href=&quot;#fnref:9&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:9:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt; &lt;a href=&quot;#fnref:9:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;3&lt;/sup&gt;&lt;/a&gt; &lt;a href=&quot;#fnref:9:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;4&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:6&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Klein Teeselink, B., &amp;amp; Melios, G. (2021). &lt;a href=&quot;https://dx.doi.org/10.2139/ssrn.3809877&quot;&gt;Weather to Protest: The Effect of Black Lives Matter Protests on the 2020 Presidential Election.&lt;/a&gt; &lt;a href=&quot;#fnref:6&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:6:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt; &lt;a href=&quot;#fnref:6:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;3&lt;/sup&gt;&lt;/a&gt; &lt;a href=&quot;#fnref:6:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;4&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:14&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Larreboure, M., &amp;amp; Gonzalez, F. (2021). &lt;a href=&quot;https://mlarreboure.com/womenmarch.pdf&quot;&gt;The Impact of the Women’s March on the U.S. House Election.&lt;/a&gt; &lt;a href=&quot;#fnref:14&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:14:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt; &lt;a href=&quot;#fnref:14:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;3&lt;/sup&gt;&lt;/a&gt; &lt;a href=&quot;#fnref:14:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;4&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:11&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Bremer et al. (2020) found a non-significant result and then did some subgroup analysis and got a significant result. I find that suspicious, but I didn’t bother to look deeper because it’s an observational study anyway. &lt;a href=&quot;#fnref:11&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:7&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;The paper did not report the p-value, but it did report the standard error, so I calculated the p-value from that. &lt;a href=&quot;#fnref:7&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
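&lt;p&gt;The conversion is a standard normal approximation; the numbers below are placeholders, not the paper’s values:&lt;/p&gt;

```python
import math

def two_sided_p(estimate, se):
    """Two-sided p-value for an estimate with standard error se,
    assuming the estimate is approximately normally distributed."""
    z = abs(estimate) / se
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

# Placeholder numbers (not from the paper): an estimate twice its SE
p = two_sided_p(2.0, 1.0)  # ~= 0.0455
```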
    &lt;/li&gt;
    &lt;li id=&quot;fn:48&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Collins, W. J., &amp;amp; Margo, R. A. (2007). &lt;a href=&quot;https://doi.org/10.1017/S0022050707000423&quot;&gt;The Economic Aftermath of the 1960s Riots in American Cities: Evidence from Property Values.&lt;/a&gt; &lt;a href=&quot;#fnref:48&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:45&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Beck, N., Gleditsch, K. S., &amp;amp; Beardsley, K. (2006). &lt;a href=&quot;https://doi.org/10.1111/j.1468-2478.2006.00391.x&quot;&gt;Space Is More than Geography: Using Spatial Econometrics in the Study of Political Economy.&lt;/a&gt; &lt;a href=&quot;#fnref:45&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:49&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;If it were true, we’d expect to see a similar phenomenon in the placebo tests of Madestam et al. (2013) and Hungerman &amp;amp; Moorthy (2023), but we don’t. &lt;a href=&quot;#fnref:49&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:15&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Belloni, A., Chernozhukov, V., &amp;amp; Hansen, C. (2010). &lt;a href=&quot;https://arxiv.org/abs/1012.1297&quot;&gt;LASSO Methods for Gaussian Instrumental Variables Models.&lt;/a&gt; &lt;a href=&quot;#fnref:15&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:43&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Conley, T. G. (1999). &lt;a href=&quot;https://doi.org/10.1016/S0304-4076(98)00084-0&quot;&gt;GMM estimation with cross sectional dependence.&lt;/a&gt; &lt;a href=&quot;#fnref:43&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:16&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I emailed the corresponding author to ask about this apparent discrepancy and did not receive a reply. &lt;a href=&quot;#fnref:16&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:57&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Hungerman &amp;amp; Moorthy (2023) did provide enough information to &lt;em&gt;estimate&lt;/em&gt; the change in Earth Day favorability per protester, by dividing change in favorability by change in number of protesters from the paper’s Table 4. However, this estimate would have high variance on the denominator, which makes the result unreliable. &lt;a href=&quot;#fnref:57&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
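&lt;p&gt;A quick simulation illustrates why a high-variance denominator makes a ratio estimate unreliable. All numbers here are invented for illustration, not taken from the paper:&lt;/p&gt;

```python
import random

random.seed(0)

# Invented values: a numerator (change in favorability) measured fairly
# precisely, and a denominator (change in protester count) whose noise
# is large relative to its mean.
NUM_MEAN, NUM_SD = 2.0, 0.5
DEN_MEAN, DEN_SD = 1.0, 0.8

ratios = [
    random.gauss(NUM_MEAN, NUM_SD) / random.gauss(DEN_MEAN, DEN_SD)
    for _ in range(10_000)
]
# Whenever the denominator lands near zero, the ratio explodes, so the
# estimate's spread is enormous even though both inputs are well-behaved.
```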
    &lt;/li&gt;
    &lt;li id=&quot;fn:21&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Favorability (1) measured public support for environmentalism as the percentage of respondents answering Yes to “we’re spending too little money” on protecting the environment. &lt;a href=&quot;#fnref:21&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:21:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt; &lt;a href=&quot;#fnref:21:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;3&lt;/sup&gt;&lt;/a&gt; &lt;a href=&quot;#fnref:21:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;4&lt;/sup&gt;&lt;/a&gt; &lt;a href=&quot;#fnref:21:4&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;5&lt;/sup&gt;&lt;/a&gt; &lt;a href=&quot;#fnref:21:5&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;6&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:58&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Gelman, A. (2007). &lt;a href=&quot;https://doi.org/10.1002/sim.3107&quot;&gt;Scaling regression inputs by dividing by two standard deviations.&lt;/a&gt; &lt;a href=&quot;#fnref:58&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:58:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt; &lt;a href=&quot;#fnref:58:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;3&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:59&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Some additional detail:&lt;/p&gt;

      &lt;p&gt;Gelman (2007) proposes scaling continuous variables by 2 standard deviations because this puts them onto the same scale as a binary variable &lt;em&gt;where the control and treatment groups have the same size&lt;/em&gt;. If you have a sample of binary outcomes where 50% of the outcomes are 0 (“no rain”) and 50% are 1 (“rain”), then the standard deviation is 0.5. If the probabilities are not 50/50 then the standard deviation will not equal 0.5. (For example, the Tea Party rainfall variable had a standard deviation of 0.401.) Arguably it would make sense to scale all binary variables to a standard deviation of 0.5. However, I did not do this because I didn’t have the necessary data for all the papers. Instead, I left all binary variables unscaled. (Gelman (2007) discusses whether probability-skewed binary variables should be scaled, but ultimately does not take a stance.)&lt;/p&gt;
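&lt;p&gt;The arithmetic above can be checked directly. A sketch, where the 0.2 rain probability is an illustrative guess that happens to reproduce a standard deviation near the Tea Party value of 0.401:&lt;/p&gt;

```python
import math

def bernoulli_sd(p):
    """Standard deviation of a 0/1 variable with P(1) = p."""
    return math.sqrt(p * (1 - p))

def scale_2sd(slope, sd):
    """Gelman-style scaling: effect per 2 standard deviations of the input."""
    return slope * 2 * sd

sd_even = bernoulli_sd(0.5)  # 0.5: a 50/50 binary variable
sd_skew = bernoulli_sd(0.2)  # 0.4: close to the Tea Party value of 0.401
```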

      &lt;p&gt;The Earth Day paper directly regressed outcomes onto a continuous rainfall variable (without doing a two-stage regression). I scaled the reported slopes by 2 times the standard deviation of rainfall.&lt;/p&gt;

      &lt;p&gt;The Women’s March paper reported values scaled to 1 standard deviation, so I divided them by 2.&lt;/p&gt;

      &lt;p&gt;The Tea Party, BLM, and Civil Rights papers reported effects in binary terms, so I did not scale them. &lt;a href=&quot;#fnref:59&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:59:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:60&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Even if I replace the relatively weak Earth Day favorability outcome with Earth Day favorability among under-20s (which had a likelihood ratio of 37), the pooled likelihood ratio is still only 3.43. &lt;a href=&quot;#fnref:60&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:30&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Defined as the number of Congress members who voted in line with conservative positions according to the American Conservative Union. &lt;a href=&quot;#fnref:30&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:31&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Defined as alignment with Tea Party positions, measured in standard deviations. &lt;a href=&quot;#fnref:31&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:33&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Evaluated on a 5-point scale from “strongly disagree” to “strongly agree”. &lt;a href=&quot;#fnref:33&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:33:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:51&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;The linked webinar is the least-bad explanation of Egger’s test that I could find, but it doesn’t explain it very well so I will attempt to explain:&lt;/p&gt;

      &lt;p&gt;In the presence of publication bias, more powerful studies will tend to have lower means, because small studies with low means don’t get published. Therefore there will be a negative correlation between a study’s power and its mean. (I measured power as the inverse standard error, but you could also use the inverse variance or the sample size.)&lt;/p&gt;

      &lt;p&gt;So you test for publication bias by doing a linear regression of study means on study power.&lt;/p&gt;

      &lt;ul&gt;
        &lt;li&gt;If the correlation is close to zero, that indicates no publication bias.&lt;/li&gt;
        &lt;li&gt;If there is a significant correlation, that’s evidence of publication bias.&lt;/li&gt;
      &lt;/ul&gt;
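      &lt;p&gt;A toy version of that regression, with made-up study numbers chosen so that weaker studies report larger effects:&lt;/p&gt;

```python
def slope_and_corr(xs, ys):
    """Least-squares slope and Pearson correlation of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / sxx, sxy / (sxx * syy) ** 0.5

# Made-up studies as (effect size, standard error) pairs
studies = [(0.80, 0.40), (0.50, 0.25), (0.35, 0.15), (0.30, 0.10)]
power = [1 / se for _, se in studies]  # inverse SE as "power"
effects = [eff for eff, _ in studies]
slope, corr = slope_and_corr(power, effects)
# Here slope and corr are both negative (weaker studies report larger
# effects): the signature of publication bias.
```

      &lt;p&gt;(Standard implementations of Egger’s test actually regress the standardized effect on precision and test the intercept; the above is only the simplified description from this footnote.)&lt;/p&gt;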

      &lt;p&gt;Lest this footnote give the impression that I know what I’m talking about, I didn’t even know what Egger’s regression test was until I wrote this. My process was that I asked Claude what statistical test I could use to check for publication bias, it suggested Egger’s test and then gave an obviously-incorrect explanation of how the test works, and then I read several barely-comprehensible articles about the test until I thought I understood it. &lt;a href=&quot;#fnref:51&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:54&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I could do full replications for the papers that published their data, but that would be considerably more work for a low chance of paying off. &lt;a href=&quot;#fnref:54&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:52&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;In my experience, I always update tables if I make revisions to my calculations, but it’s hard to keep track of everywhere in the text that I referenced a number. &lt;a href=&quot;#fnref:52&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:35&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Technically, one of them did look at media coverage, but not using the rainfall method. &lt;a href=&quot;#fnref:35&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:42&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I’m thinking about the strength of evidence from a sociology perspective. Getting good evidence in sociology is hard. &lt;a href=&quot;#fnref:42&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:26&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Hooper, L., Martin, N., Jimoh, O. F., Kirk, C., Foster, E., &amp;amp; Abdelhamid, A. S. (2020). &lt;a href=&quot;https://doi.org/10.1002/14651858.cd011737.pub3&quot;&gt;Reduction in saturated fat intake for cardiovascular disease.&lt;/a&gt; &lt;a href=&quot;#fnref:26&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:55&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Three if you exclude the BLM study due to its &lt;a href=&quot;#failed-placebo-tests&quot;&gt;failed placebo tests&lt;/a&gt;; four if you include it. &lt;a href=&quot;#fnref:55&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>The Triple-Interaction-Effects Argument</title>
				<pubDate>Thu, 10 Apr 2025 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2025/04/10/triple_interaction_effects/</link>
				<guid isPermaLink="true">http://mdickens.me/2025/04/10/triple_interaction_effects/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;In this post I will explain the most impressive argument I heard in 2024.&lt;/p&gt;

&lt;p&gt;First, some context:&lt;/p&gt;

&lt;p&gt;There is an ongoing debate in the bodybuilding/strength training community about how much protein you should eat while losing weight.&lt;/p&gt;

&lt;p&gt;Some say you should eat more protein if you’re losing weight:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;If you’re eating less, your body is under extra pressure to cannibalize your muscles. Therefore, you should eat more protein to cancel this out.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The standard rebuttal:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;Experimental trials have found that muscle gains max out when subjects eat 0.7–0.8 grams of protein per pound of bodyweight, and that’s true both when participants are maintaining weight and when they’re losing weight. There doesn’t appear to be a difference.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And the counter-rebuttal:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;Almost all research looks at novice lifters. Experienced athletes have a more difficult time gaining muscle,&lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt; so losing weight will have a bigger negative impact on them, and therefore they need to eat more protein.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I used to believe this. Then I heard the most impressive argument of 2024.&lt;/p&gt;

&lt;p&gt;I heard the argument in a &lt;a href=&quot;https://www.youtube.com/watch?v=__hRCUDVJx0&amp;amp;t=322s&quot;&gt;YouTube video&lt;/a&gt; by Menno Henselmans:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;It’s possible that in trained individuals there is a triple interaction effect, because that’s what you’re arguing here. If you’re saying that protein requirements increase in an energy deficit, but only in strength-trained individuals, then you are arguing for a triple interaction effect. […] That is very, very, very rare. Triple interaction effects, biologically speaking, simply do not occur much.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I didn’t understand what he was talking about. I spent two days pondering what it meant. On the third day, it finally clicked and I realized he was right.&lt;/p&gt;

&lt;p&gt;To claim that trained lifters should eat more protein on an energy deficit, you’d need to believe that:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Above a certain level of protein intake (0.7–0.8 grams per pound), additional protein has no effect on muscle growth.&lt;sup id=&quot;fnref:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt;&lt;/li&gt;
  &lt;li&gt;Most of the time, trained athletes don’t need more protein than novices.&lt;/li&gt;
  &lt;li&gt;Novices don’t need more protein while losing weight than while maintaining/gaining weight.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;HOWEVER&lt;/strong&gt;, (a) among trained individuals who are (b) losing weight, the ones (c) who eat more protein (beyond 0.7–0.8 g/lb) gain more muscle.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The first variable (protein intake) has no interaction with muscle growth.&lt;/p&gt;

&lt;p&gt;The second variable (trained vs. untrained) has no interaction with muscle growth.&lt;/p&gt;

&lt;p&gt;The third variable (losing vs. maintaining weight) has no interaction with muscle growth.&lt;/p&gt;

&lt;p&gt;The first and second variables together (protein intake + trained/untrained) have no interaction with muscle growth.&lt;/p&gt;

&lt;p&gt;The first and third variables together (protein intake + losing/maintaining weight) have no interaction with muscle growth.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;HOWEVER&lt;/strong&gt;, when you put all three variables together, an interaction suddenly appears—a triple interaction effect.&lt;/p&gt;

&lt;p&gt;This is a very strange claim. If all three variables together affect muscle growth, then you would expect each variable &lt;em&gt;individually&lt;/em&gt; to affect muscle growth. At the very least, you would expect some pair of the three variables together to affect muscle growth.&lt;/p&gt;

&lt;p&gt;(In fact, it is mathematically impossible to construct a differentiable function &lt;code&gt;f(x, y, z)&lt;/code&gt; on a connected domain that is constant with respect to x, constant with respect to y, and constant with respect to z, but &lt;em&gt;not&lt;/em&gt; constant overall. Although you could have a function &lt;code&gt;f(x, y, z)&lt;/code&gt; where the slope with respect to each individual variable is &lt;em&gt;close to&lt;/em&gt; 0, but not &lt;em&gt;quite&lt;/em&gt; 0.)&lt;/p&gt;
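&lt;p&gt;To spell out why (a sketch using the standard line-integral identity): if every partial derivative of a differentiable function vanishes on a connected domain, the function is constant along any smooth path between two points, hence constant everywhere:&lt;/p&gt;

```latex
% If \nabla f = 0 everywhere on a connected domain, then for any
% smooth path \gamma from point a to point b:
\frac{\partial f}{\partial x} = \frac{\partial f}{\partial y}
  = \frac{\partial f}{\partial z} = 0
\quad\Longrightarrow\quad
f(b) - f(a) = \int_0^1 \nabla f(\gamma(t)) \cdot \gamma'(t)\,dt = 0
```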

&lt;p&gt;Not to say a triple interaction effect can’t occur in the real world. It could be that muscle growth does depend on each of (protein intake, training experience, calorie deficit), but the relationships are so weak that the studies failed to pick them up.&lt;/p&gt;

&lt;p&gt;But if you believe the studies’ results are correct, then it seems difficult—maybe even impossible—to still believe that trained lifters need to eat more protein while on a calorie deficit.&lt;/p&gt;

&lt;div style=&quot;text-align:center&quot;&gt;***&lt;/div&gt;

&lt;p&gt;This was the best argument I heard in 2024 because:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;If you think about it, it’s obviously correct. It changed my mind as soon as I understood it.&lt;/li&gt;
  &lt;li&gt;It’s difficult to come up with. (I’ve never heard anyone else make this argument.)&lt;/li&gt;
&lt;/ul&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:2&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I’m conflating gaining strength with putting on muscle. There’s a difference, but we can consider them the same thing for the purposes of this post. &lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:3&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;This claim is somewhat controversial, but let’s assume it’s true for the sake of this argument.&lt;/p&gt;

      &lt;p&gt;Randomized controlled trials find no benefit to more than ~0.7 g/lb, and I quoted a range of 0.7–0.8 g/lb to account for variation between individuals. But the existing studies aren’t that great so I don’t have high confidence that that’s the correct range. &lt;a href=&quot;#fnref:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>You Can Now Read My Reading Notes</title>
				<pubDate>Mon, 31 Mar 2025 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2025/03/31/new_reading_notes/</link>
				<guid isPermaLink="true">http://mdickens.me/2025/03/31/new_reading_notes/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;Since 2015, I have been taking notes on most articles I read. I figured other people might find them useful, so I cleaned them up and &lt;a href=&quot;https://mdickens.me/reading-notes/&quot;&gt;published them on my website&lt;/a&gt;. You can find them via the new “Notes” tab.&lt;/p&gt;

&lt;p&gt;I will update the page every once in a while as I read more articles and take more notes.&lt;/p&gt;

&lt;p&gt;I also have notes on every educational book I’ve read since 2015, but the notes are on physical paper (can you believe it?). I might digitize them at some point.&lt;/p&gt;



                </description>
			</item>
		
			<item>
				<title>There Are Three Kinds of "No Evidence"</title>
				<pubDate>Mon, 03 Mar 2025 00:00:00 -0800</pubDate>
				<link>http://mdickens.me/2025/03/03/three_kinds_of_no_evidence/</link>
				<guid isPermaLink="true">http://mdickens.me/2025/03/03/three_kinds_of_no_evidence/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;David J. Balan once proposed that &lt;a href=&quot;https://www.overcomingbias.com/p/doctor-there-arhtml&quot;&gt;there are two kinds of “no evidence”&lt;/a&gt;:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;There have been lots of studies directly on this point which came back with the result that the hypothesis is false.&lt;/li&gt;
  &lt;li&gt;There is no evidence because there are few or no relevant studies.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I propose that there are three kinds of “no evidence”:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;The hypothesis has never been studied.&lt;/li&gt;
  &lt;li&gt;There are studies, the studies failed to find supporting evidence, but they wouldn’t have found supporting evidence even if the hypothesis were true.&lt;/li&gt;
  &lt;li&gt;There are studies, the studies &lt;em&gt;should&lt;/em&gt; have found supporting evidence if the hypothesis were true, and they didn’t.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Example of type 1: A 2003 literature review found that there were &lt;a href=&quot;https://doi.org/10.1136/bmj.327.7429.1459&quot;&gt;no studies&lt;/a&gt;&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt; showing that parachutes could prevent injury when jumping out of a plane.&lt;/p&gt;

&lt;p&gt;Example of type 2: In 2018, there was finally &lt;a href=&quot;https://doi.org/10.1136/bmj.k5094&quot;&gt;a randomized controlled trial&lt;/a&gt;&lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt; on the effectiveness of parachutes, and it found no difference between the parachute group and the control group. However, participants only jumped from a height of 0.6 meters (~2 feet). I don’t know about you, but this result does not make me want to jump out of a plane without a parachute.&lt;/p&gt;

&lt;p&gt;As in the parachute example, you see type-2 “no evidence” whenever the conditions of a study don’t match the real-world environment. You also see type-2 “no evidence” when an experiment is &lt;a href=&quot;https://en.wikipedia.org/wiki/Power_(statistics)&quot;&gt;underpowered&lt;/a&gt;. Say you want to test the hypothesis that boys are taller than girls. So you go find your niece Sally and your neighbor’s son James, and it turns out Sally is an inch taller than James. Your methodology was valid—you can indeed test the hypothesis by finding some people and measuring their heights—but your sample size was too small.&lt;/p&gt;

&lt;p&gt;(The difference between type 2 and type 3 can be a matter of degree. The more powerful a study is, the stronger its “no evidence” if it fails to find an effect.)&lt;/p&gt;
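
&lt;p&gt;A quick simulation (with made-up height numbers, purely for illustration) shows why the Sally-and-James study is underpowered: with one child per group, sampling noise frequently swamps a real difference in population means, while with 100 per group it essentially never does.&lt;/p&gt;

```python
import random

random.seed(0)
BOY_MEAN, GIRL_MEAN, SD = 175.0, 162.0, 7.0  # made-up heights in cm

def wrong_rate(n, trials=2000):
    """Fraction of trials where the sampled girls average taller than the
    sampled boys, even though boys are taller in the underlying population."""
    wrong = 0
    for _ in range(trials):
        boys = [random.gauss(BOY_MEAN, SD) for _ in range(n)]
        girls = [random.gauss(GIRL_MEAN, SD) for _ in range(n)]
        if sum(girls) / n > sum(boys) / n:
            wrong += 1
    return wrong / trials

print(wrong_rate(1))    # roughly 1 in 10: the n=1 study often "refutes" a true hypothesis
print(wrong_rate(100))  # essentially zero
```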


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Smith, G. C. S. (2003). &lt;a href=&quot;https://doi.org/10.1136/bmj.327.7429.1459&quot;&gt;Parachute use to prevent death and major trauma related to gravitational challenge: systematic review of randomised controlled trials.&lt;/a&gt; &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:2&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Yeh, R. W., Valsdottir, L. R., Yeh, M. W., Shen, C., Kramer, D. B., Strom, J. B., Secemsky, E. A. et al. (2018). &lt;a href=&quot;https://doi.org/10.1136/bmj.k5094&quot;&gt;Parachute use to prevent death and major trauma when jumping from aircraft: randomized controlled trial.&lt;/a&gt; &lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>Return Stacked Funds: A New Way to Get Leverage</title>
				<pubDate>Tue, 04 Feb 2025 00:00:00 -0800</pubDate>
				<link>http://mdickens.me/2025/02/04/return_stacked_funds/</link>
				<guid isPermaLink="true">http://mdickens.me/2025/02/04/return_stacked_funds/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;Some &lt;a href=&quot;https://reducing-suffering.org/should-altruists-leverage-investments/&quot;&gt;people&lt;/a&gt; (including &lt;a href=&quot;https://mdickens.me/2020/01/06/how_much_leverage_should_altruists_use/&quot;&gt;me&lt;/a&gt;)  have argued that altruists often benefit from leveraging their investments. Recently, it has become easier to use leverage thanks to the emergence of &lt;a href=&quot;https://www.returnstacked.com/what-is-return-stacking-for-diversification/&quot;&gt;return stacked&lt;/a&gt; funds.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This is not financial advice.&lt;/em&gt;&lt;/p&gt;

&lt;!-- more --&gt;

&lt;h2 id=&quot;contents&quot;&gt;Contents&lt;/h2&gt;

&lt;ul id=&quot;markdown-toc&quot;&gt;
  &lt;li&gt;&lt;a href=&quot;#contents&quot; id=&quot;markdown-toc-contents&quot;&gt;Contents&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#what-is-return-stacking&quot; id=&quot;markdown-toc-what-is-return-stacking&quot;&gt;What is return stacking?&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#an-overview-of-return-stacking-funds&quot; id=&quot;markdown-toc-an-overview-of-return-stacking-funds&quot;&gt;An overview of return stacking funds&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#the-true-cost-of-return-stacked-etfs&quot; id=&quot;markdown-toc-the-true-cost-of-return-stacked-etfs&quot;&gt;The true cost of return stacked ETFs&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#pros-and-cons-of-return-stacked-funds&quot; id=&quot;markdown-toc-pros-and-cons-of-return-stacked-funds&quot;&gt;Pros and cons of return stacked funds&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#are-bonds-a-good-investment&quot; id=&quot;markdown-toc-are-bonds-a-good-investment&quot;&gt;Are bonds a good investment?&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#source-code&quot; id=&quot;markdown-toc-source-code&quot;&gt;Source code&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#acknowledgments&quot; id=&quot;markdown-toc-acknowledgments&quot;&gt;Acknowledgments&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#notes&quot; id=&quot;markdown-toc-notes&quot;&gt;Notes&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;what-is-return-stacking&quot;&gt;What is return stacking?&lt;/h2&gt;

&lt;p&gt;Return stacking is a way of getting leveraged exposure to multiple return streams simultaneously. For example, &lt;a href=&quot;https://www.returnstackedetfs.com/rssb-return-stacked-global-stocks-bonds/&quot;&gt;RSSB&lt;/a&gt; invests 100% into global equities and 100% into US Treasury bonds, effectively giving it 2:1 leverage on a diversified stock/bond portfolio.&lt;/p&gt;

&lt;p&gt;A return stacked ETF is a type of leveraged ETF. But whereas traditional leveraged ETFs (such as &lt;a href=&quot;https://etfdb.com/etf/SPXL/&quot;&gt;SPXL&lt;/a&gt;) lever up a single index like the S&amp;amp;P 500, a return stacked fund holds multiple asset classes.&lt;/p&gt;

&lt;p&gt;Return stacked ETFs have lower management fees than single-index leveraged ETFs, and (with low confidence) they also appear to have lower overhead costs, for reasons that are not entirely clear to me (my guess: a combination of cheaper borrowing and lower transaction costs).&lt;/p&gt;
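
&lt;p&gt;To make the mechanics concrete, here is a minimal sketch (my own simplification, ignoring fees and rebalancing) of how a stacked fund’s one-period return decomposes: the weighted asset returns, minus the cost of financing any exposure above 100% at roughly the T-bill rate.&lt;/p&gt;

```python
def stacked_return(weighted_returns, tbill_rate):
    """Approximate one-period return of a stacked portfolio.

    weighted_returns: list of (weight, period_return) pairs, where the
    weights can sum to more than 1. Exposure above 100% is assumed to be
    financed at the T-bill rate; fees and rebalancing are ignored.
    """
    gross = sum(w * r for w, r in weighted_returns)
    leverage = sum(w for w, _ in weighted_returns)
    borrowed = max(leverage - 1.0, 0.0)
    return gross - borrowed * tbill_rate

# An RSSB-style year with hypothetical numbers: stocks +8%, bonds +3%,
# T-bills at 2%. The stack earns both asset returns minus the borrow cost.
print(round(stacked_return([(1.0, 0.08), (1.0, 0.03)], 0.02), 4))  # 0.09
```

&lt;p&gt;The same sketch covers the 1.5:1 funds: 90% stocks + 60% bonds means borrowing only 50%, so the financing drag is proportionally smaller.&lt;/p&gt;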

&lt;h2 id=&quot;an-overview-of-return-stacking-funds&quot;&gt;An overview of return stacking funds&lt;/h2&gt;

&lt;p&gt;There are four brands of return stacking funds that I know of:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;the eponymous &lt;a href=&quot;https://www.returnstackedetfs.com/&quot;&gt;Return Stacked&lt;/a&gt; ETFs (&lt;a href=&quot;https://www.returnstackedetfs.com/rssb-return-stacked-global-stocks-bonds/&quot;&gt;RSSB&lt;/a&gt;, &lt;a href=&quot;https://www.returnstackedetfs.com/rsst-return-stacked-us-stocks-managed-futures/&quot;&gt;RSST&lt;/a&gt;, &lt;a href=&quot;https://www.returnstackedetfs.com/rsbt-return-stacked-bonds-managed-futures/&quot;&gt;RSBT&lt;/a&gt;, &lt;a href=&quot;https://www.returnstackedetfs.com/rssy-return-stacked-us-stocks-futures-yield/&quot;&gt;RSSY&lt;/a&gt;, &lt;a href=&quot;https://www.returnstackedetfs.com/rsby-return-stacked-bonds-futures-yield/&quot;&gt;RSBY&lt;/a&gt;, &lt;a href=&quot;https://www.returnstackedetfs.com/rsba-return-stacked-bonds-merger-arbitrage/&quot;&gt;RSBA&lt;/a&gt;, &lt;a href=&quot;https://etfdb.com/etf/BTGD&quot;&gt;BTGD&lt;/a&gt;, &lt;a href=&quot;https://rationalmf.com/funds/return-stacked-balanced-allocation-systematic-macro-fund-rdmax-rdmcx-rdmix/&quot;&gt;RDMIX&lt;/a&gt;)&lt;/li&gt;
  &lt;li&gt;WisdomTree Capital Efficient ETFs (&lt;a href=&quot;https://www.wisdomtree.com/investments/etfs/capital-efficient/NTSX&quot;&gt;NTSX&lt;/a&gt;, &lt;a href=&quot;https://www.wisdomtree.com/investments/etfs/capital-efficient/NTSI&quot;&gt;NTSI&lt;/a&gt;, &lt;a href=&quot;https://www.wisdomtree.com/investments/etfs/capital-efficient/NTSE&quot;&gt;NTSE&lt;/a&gt;, &lt;a href=&quot;https://www.wisdomtree.com/investments/etfs/capital-efficient/GDE&quot;&gt;GDE&lt;/a&gt;, &lt;a href=&quot;https://www.wisdomtree.com/investments/etfs/capital-efficient/GDMN&quot;&gt;GDMN&lt;/a&gt;)&lt;/li&gt;
  &lt;li&gt;PIMCO StocksPLUS Long Duration Fund (&lt;a href=&quot;https://www.pimco.com/us/en/investments/mutual-fund/pimco-stocksplus-long-duration-fund/inst-usd&quot;&gt;PSLDX&lt;/a&gt;)&lt;/li&gt;
  &lt;li&gt;Evoke Advisors Ultra Risk Parity ETF (&lt;a href=&quot;https://www.rparetf.com/upar&quot;&gt;UPAR&lt;/a&gt;)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The PIMCO fund has been around since 2007, but the others only launched within the last few years.&lt;/p&gt;

&lt;p&gt;What do each of these funds invest in?&lt;/p&gt;

&lt;p&gt;Six of the funds stack traditional asset classes:&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Fund&lt;/th&gt;
      &lt;th&gt;First Asset Class&lt;/th&gt;
      &lt;th&gt;Second Asset Class&lt;/th&gt;
      &lt;th&gt;Leverage&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;RSSB&lt;/td&gt;
      &lt;td&gt;100% global stocks&lt;/td&gt;
      &lt;td&gt;100% US Treasury bonds&lt;/td&gt;
      &lt;td&gt;2:1&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;NTSX&lt;/td&gt;
      &lt;td&gt;90% US stocks&lt;/td&gt;
      &lt;td&gt;60% US Treasury bonds&lt;/td&gt;
      &lt;td&gt;1.5:1&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;NTSI&lt;/td&gt;
      &lt;td&gt;90% international stocks&lt;/td&gt;
      &lt;td&gt;60% US Treasury bonds&lt;/td&gt;
      &lt;td&gt;1.5:1&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;NTSE&lt;/td&gt;
      &lt;td&gt;90% emerging market stocks&lt;/td&gt;
      &lt;td&gt;60% US Treasury bonds&lt;/td&gt;
      &lt;td&gt;1.5:1&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;UPAR&lt;/td&gt;
      &lt;td&gt;too many for this table*&lt;/td&gt;
      &lt;td&gt; &lt;/td&gt;
      &lt;td&gt;1.68:1&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;PSLDX&lt;/td&gt;
      &lt;td&gt;~100% US stocks**&lt;/td&gt;
      &lt;td&gt;~100% bonds**&lt;/td&gt;
      &lt;td&gt;~2:1**&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;*UPAR &lt;a href=&quot;https://www.rparetf.com/upar/investment-case&quot;&gt;targets&lt;/a&gt; 17.5% U.S. equities, 7% international equities, 10.5% emerging markets equities, 21% commodity producer equities, 14% gold, 49% &lt;a href=&quot;https://en.wikipedia.org/wiki/United_States_Treasury_security#TIPS&quot;&gt;TIPS&lt;/a&gt;, and 49% Treasuries for a total allocation of 168%.&lt;/p&gt;

&lt;p&gt;**PSLDX percentages are only approximate because the fund is actively managed and its holdings vary over time.&lt;/p&gt;

&lt;p&gt;These funds stack traditional asset classes with alternatives:&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Fund&lt;/th&gt;
      &lt;th&gt;First Asset Class&lt;/th&gt;
      &lt;th&gt;Second Asset Class&lt;/th&gt;
      &lt;th&gt;Leverage&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;RSST&lt;/td&gt;
      &lt;td&gt;100% US stocks&lt;/td&gt;
      &lt;td&gt;100% managed futures*&lt;/td&gt;
      &lt;td&gt;2:1*&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;RSBT&lt;/td&gt;
      &lt;td&gt;100% US bonds&lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;&lt;/td&gt;
      &lt;td&gt;100% managed futures*&lt;/td&gt;
      &lt;td&gt;2:1*&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;RSSY&lt;/td&gt;
      &lt;td&gt;100% US stocks&lt;/td&gt;
      &lt;td&gt;100% futures yield*&lt;/td&gt;
      &lt;td&gt;2:1*&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;RSBY&lt;/td&gt;
      &lt;td&gt;100% US bonds&lt;/td&gt;
      &lt;td&gt;100% futures yield*&lt;/td&gt;
      &lt;td&gt;2:1*&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;RSBA&lt;/td&gt;
      &lt;td&gt;100% US Treasury bonds&lt;/td&gt;
      &lt;td&gt;100% merger arbitrage*&lt;/td&gt;
      &lt;td&gt;2:1*&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;RDMIX&lt;/td&gt;
      &lt;td&gt;50/50 US stocks/bonds&lt;/td&gt;
      &lt;td&gt;100% systematic macro*&lt;/td&gt;
      &lt;td&gt;2:1*&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;BTGD&lt;/td&gt;
      &lt;td&gt;100% bitcoin&lt;/td&gt;
      &lt;td&gt;100% gold&lt;/td&gt;
      &lt;td&gt;2:1&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;GDE&lt;/td&gt;
      &lt;td&gt;90% US stocks&lt;/td&gt;
      &lt;td&gt;90% gold&lt;/td&gt;
      &lt;td&gt;1.8:1&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;GDMN&lt;/td&gt;
      &lt;td&gt;90% gold miner stocks&lt;/td&gt;
      &lt;td&gt;90% gold&lt;/td&gt;
      &lt;td&gt;1.8:1&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;*Managed futures (a.k.a. &lt;a href=&quot;https://en.wikipedia.org/wiki/Trend_following&quot;&gt;trendfollowing&lt;/a&gt;), futures yield (a.k.a. &lt;a href=&quot;https://en.wikipedia.org/wiki/Carry_(investment)&quot;&gt;carry&lt;/a&gt; or &lt;a href=&quot;https://en.wikipedia.org/wiki/Roll_yield&quot;&gt;roll yield&lt;/a&gt;), &lt;a href=&quot;https://en.wikipedia.org/wiki/Risk_arbitrage&quot;&gt;merger arbitrage&lt;/a&gt;, and &lt;a href=&quot;https://en.wikipedia.org/wiki/Global_macro&quot;&gt;systematic macro&lt;/a&gt; are all long/short strategies, not simple assets that you can buy and hold. So it’s somewhat arbitrary to say that the funds invest 100% into those strategies.&lt;/p&gt;

&lt;h2 id=&quot;the-true-cost-of-return-stacked-etfs&quot;&gt;The true cost of return stacked ETFs&lt;/h2&gt;

&lt;p&gt;In a &lt;a href=&quot;https://mdickens.me/2021/03/04/true_cost_of_leveraged_etfs/&quot;&gt;previous post&lt;/a&gt;, I looked at how a leveraged index fund &lt;em&gt;should&lt;/em&gt; perform and compared that against how leveraged ETFs actually &lt;em&gt;did&lt;/em&gt; perform. I found that the ETFs consistently cost more than expected, by an average of about one percentage point.&lt;/p&gt;

&lt;p&gt;I attempted to do the same analysis for return stacked ETFs. These ETFs are harder to replicate because they don’t track indexes, so I don’t have high confidence in the results. That said, my numbers suggest that return stacked ETFs are more cost-effective than conventional leveraged ETFs.&lt;/p&gt;

&lt;p&gt;I was able to replicate RSSB and NTSX:&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;ETF&lt;/th&gt;
      &lt;th&gt;Leverage&lt;/th&gt;
      &lt;th&gt;Stock ETF(s)&lt;/th&gt;
      &lt;th&gt;Bond Fund(s)&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;RSSB&lt;/td&gt;
      &lt;td&gt;100% + 100%&lt;/td&gt;
      &lt;td&gt;&lt;a href=&quot;https://investor.vanguard.com/investment-products/etfs/profile/vti&quot;&gt;VTI&lt;/a&gt; + &lt;a href=&quot;https://investor.vanguard.com/investment-products/etfs/profile/vxus&quot;&gt;VXUS&lt;/a&gt;&lt;sup id=&quot;fnref:4&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:4&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt;&lt;/td&gt;
      &lt;td&gt;bond futures ladder&lt;sup id=&quot;fnref:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt;&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;NTSX&lt;/td&gt;
      &lt;td&gt;90% + 60%&lt;/td&gt;
      &lt;td&gt;SPY (S&amp;amp;P 500)&lt;/td&gt;
      &lt;td&gt;bond futures ladder&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt;&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;I also attempted to replicate NTSI, PSLDX, and GDE, but I couldn’t find benchmarks that tracked them well enough.&lt;/p&gt;

&lt;p&gt;I calculated excess costs of each fund as the hypothetical return you’d get if you levered up the benchmark (borrowing at the 3-month T-bill rate), minus the actual historical return of the fund.&lt;/p&gt;
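
&lt;p&gt;As a sketch (my framing, not the exact code from the linked repo), the calculation looks like this: build the levered benchmark return from the weighted asset returns, subtract borrowing costs at the T-bill rate, then subtract the fund’s actual return. The numbers below are hypothetical.&lt;/p&gt;

```python
def excess_cost(benchmark_weighted_returns, tbill_rate, fund_return):
    """Levered-benchmark return minus the fund's actual return.

    benchmark_weighted_returns: list of (weight, period_return) pairs.
    Exposure above 100% is borrowed at the 3-month T-bill rate. A
    negative result means the real-life fund beat its benchmark.
    """
    gross = sum(w * r for w, r in benchmark_weighted_returns)
    borrowed = sum(w for w, _ in benchmark_weighted_returns) - 1.0
    return (gross - borrowed * tbill_rate) - fund_return

# Hypothetical year: benchmark stocks +10%, bonds +2%, T-bills at 3%,
# and the fund itself returned 9.5%, beating the benchmark by 50 bps.
print(round(excess_cost([(1.0, 0.10), (1.0, 0.02)], 0.03, 0.095), 4))  # -0.005
```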

&lt;p&gt;The following table shows the total excess cost and after-fee cost for the return stacked ETFs. Excess cost is shown per 100% leverage (the excess on NTSX is multiplied by two because it only has 50% leverage). After-fee cost gives the excess cost minus the difference in expense ratios between the ETF and the benchmark—this represents the “unexpected” portion of the cost, since you expect to pay the expense ratio no matter what. &lt;code&gt;r&lt;/code&gt; gives the correlation between the return stacked ETF and the benchmark. I calculated the average annual cost for each ETF starting from the earliest year for which the ETF had a full year of data.&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;ETF&lt;/th&gt;
      &lt;th&gt;Excess Cost&lt;/th&gt;
      &lt;th&gt;After Fee&lt;/th&gt;
      &lt;th&gt;r&lt;/th&gt;
      &lt;th&gt;Start Year&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;RSSB&lt;/td&gt;
      &lt;td&gt;-0.55%&lt;/td&gt;
      &lt;td&gt;-0.84%&lt;/td&gt;
      &lt;td&gt;0.998&lt;/td&gt;
      &lt;td&gt;2024&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;NTSX&lt;/td&gt;
      &lt;td&gt;-0.17%&lt;/td&gt;
      &lt;td&gt;-0.41%&lt;/td&gt;
      &lt;td&gt;0.997&lt;/td&gt;
      &lt;td&gt;2019&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;(The costs were negative, which means the real-life funds &lt;em&gt;outperformed&lt;/em&gt; the benchmarks.)&lt;/p&gt;

&lt;p&gt;Excess costs for each individual year for NTSX:&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt;2019&lt;/th&gt;
      &lt;th&gt;2020&lt;/th&gt;
      &lt;th&gt;2021&lt;/th&gt;
      &lt;th&gt;2022&lt;/th&gt;
      &lt;th&gt;2023&lt;/th&gt;
      &lt;th&gt;2024&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;NTSX&lt;/td&gt;
      &lt;td&gt;-0.48&lt;/td&gt;
      &lt;td&gt;-3.50&lt;/td&gt;
      &lt;td&gt;2.72&lt;/td&gt;
      &lt;td&gt;1.92&lt;/td&gt;
      &lt;td&gt;-0.93&lt;/td&gt;
      &lt;td&gt;-2.15&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;As we can see, excess costs varied quite a bit from year to year. However, they were still generally lower than the &lt;a href=&quot;https://mdickens.me/2021/03/04/true_cost_of_leveraged_etfs/#measuring-the-cost-of-leveraged-etfs&quot;&gt;costs of conventional leveraged ETFs&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In fact, the excess costs were &lt;em&gt;negative&lt;/em&gt; most years. That’s surprising, since the benchmark does not account for transaction costs.&lt;/p&gt;

&lt;p&gt;Why were return stacked funds (apparently) more cost-effective than conventional leveraged ETFs?&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;These funds have lower expense ratios. For example, RSSB charges 0.36% and SSO (a 2x leveraged S&amp;amp;P 500 fund) charges 0.89%.&lt;/li&gt;
  &lt;li&gt;Traditional leveraged ETFs rebalance daily. The Return Stacked and WisdomTree ETFs only rebalance if the holdings drift 5 percentage points away from the target weights. Rebalancing has transaction costs, which could be significant or could be close to zero, depending on various factors.&lt;/li&gt;
  &lt;li&gt;The return stacked funds get leverage via Treasury futures, which is approximately the cheapest way to get leverage. Conventional leveraged ETFs primarily use &lt;a href=&quot;https://www.investopedia.com/articles/optioninvestor/07/swaps.asp&quot;&gt;swaps&lt;/a&gt;, which have an opaque pricing structure and might cost a lot more. (I have no idea how much they &lt;em&gt;actually&lt;/em&gt; cost because the pricing is opaque.)&lt;/li&gt;
&lt;/ol&gt;
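
&lt;p&gt;Point 2 can be sketched as a drift-band rule (a simplification of mine; the funds’ exact drift definition may differ): trade only when some holding strays more than 5 percentage points from its target weight.&lt;/p&gt;

```python
def needs_rebalance(weights, targets, band=0.05):
    """True if any holding drifted more than `band` (5 points by default)
    from its target weight. Simplified sketch of a drift-band rule, with
    weights expressed as fractions of NAV (so a 2:1 fund's sum to 2)."""
    return any(abs(w - t) > band for w, t in zip(weights, targets))

# RSSB-style targets of 100% stocks / 100% bonds, after stocks rally:
print(needs_rebalance([1.03, 0.98], [1.0, 1.0]))  # False: still inside the band
print(needs_rebalance([1.07, 0.94], [1.0, 1.0]))  # True: stocks drifted 7 points
```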

&lt;p&gt;Those factors explain why return stacked ETFs are cheaper than 3x leveraged index ETFs. But how is it possible for a return stacked ETF to &lt;em&gt;outperform&lt;/em&gt; a leveraged combination of index funds?&lt;/p&gt;

&lt;p&gt;My benchmarks have some margin of error—they do not perfectly track the return stacked ETFs. Based on playing around with the implementation details of the benchmark, I believe it could be off by perhaps one percentage point.&lt;sup id=&quot;fnref:5&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:5&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;5&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;The most likely source of tracking error is rebalance timing. Small changes in when you rebalance can significantly change year-to-year performance, especially in years like 2024 where some asset classes perform much better than others. If stocks outpaced bonds for most of the year, and the fund was supposed to rebalance from stocks to bonds, then the real-life fund might have gained an edge over the benchmark by delaying rebalancing a little longer.&lt;/p&gt;

&lt;p&gt;Even if these (apparently) negative costs do not persist, they still provide evidence that return stacked ETFs have lower costs than single-asset leveraged ETFs.&lt;/p&gt;

&lt;h2 id=&quot;pros-and-cons-of-return-stacked-funds&quot;&gt;Pros and cons of return stacked funds&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Pros:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;They’re a convenient way to get leverage, much more convenient than options or futures.&lt;/li&gt;
  &lt;li&gt;They appear to have lower all-in costs than conventional leveraged ETFs.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;None of them offer greater than a 100% allocation to equities.&lt;/li&gt;
  &lt;li&gt;Limited choices—there are only a handful of return stacked ETFs available, and they might not include the asset classes you want.
    &lt;ul&gt;
      &lt;li&gt;I personally would like to see a global stocks + managed futures ETF, but that doesn’t exist. There’s only US stocks + managed futures (RSST) and bonds + managed futures (RSBT).&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;As with other leveraged ETFs, the costs of return stacked ETFs fluctuate from year to year. Even though the costs are low on average, in any given year a return stacked ETF might perform worse than expected.&lt;/li&gt;
  &lt;li&gt;I could only determine the costs for two of the return stacked ETFs. The others might have higher costs.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;are-bonds-a-good-investment&quot;&gt;Are bonds a good investment?&lt;/h2&gt;

&lt;p&gt;Most of the return stacked funds hold bonds. A question that some people ask:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;Does it make sense to own return stacked stocks + bonds? Wouldn’t I rather have pure leveraged stocks instead?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Good question! I don’t know!&lt;/p&gt;

&lt;p&gt;An argument against buying bonds:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;Right now, the &lt;a href=&quot;https://home.treasury.gov/resource-center/data-chart-center/interest-rates/TextView?type=daily_treasury_yield_curve&amp;amp;field_tdr_date_value_month=202502&quot;&gt;yield curve&lt;/a&gt; is nearly flat: yields on long-term bonds are only slightly higher than on short-term bonds. Why would you borrow at the short-term rate to earn the long-term rate if those rates are (nearly) the same?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Two counter-arguments:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;ol&gt;
    &lt;li&gt;The efficient market hypothesis predicts that you can’t time the bond market, so you shouldn’t change how you invest based on what the yield curve looks like.&lt;/li&gt;
    &lt;li&gt;A flat or inverted yield curve suggests that short-term rates will go down in the future. You might want to “lock in” the current rate by buying long-term bonds.&lt;/li&gt;
  &lt;/ol&gt;
&lt;/blockquote&gt;

&lt;p&gt;(Really these counter-arguments are the same—the (presumed) reason the yield curve is flat is that the market is pricing in future changes in bond yields.)&lt;/p&gt;

&lt;p&gt;Another argument against bonds:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;In the long run, bonds have only earned a little bit of a premium over short-term T-bills. Given the overhead costs of using leverage, leveraged bonds might have near-zero or even negative expected return.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And two counter-arguments:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;ol&gt;
    &lt;li&gt;If you can borrow at close to the risk-free rate, bonds should still have a positive long-run premium.&lt;/li&gt;
    &lt;li&gt;Even if leveraged bonds have ~zero expected return, they still add value to a portfolio if they perform well during equity downturns.&lt;/li&gt;
  &lt;/ol&gt;
&lt;/blockquote&gt;

&lt;p&gt;Which side of the argument is correct is left as an exercise for the reader.&lt;/p&gt;

&lt;p&gt;If you don’t want to hold bonds, there are some bondless return stacked funds available:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;&lt;a href=&quot;https://www.returnstackedetfs.com/rsst-return-stacked-us-stocks-managed-futures/&quot;&gt;RSST&lt;/a&gt; holds stocks + managed futures (a.k.a. &lt;a href=&quot;https://en.wikipedia.org/wiki/Trend_following&quot;&gt;trendfollowing&lt;/a&gt;).&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://www.returnstackedetfs.com/rssy-return-stacked-us-stocks-futures-yield/&quot;&gt;RSSY&lt;/a&gt; holds stocks + futures yield (a.k.a. &lt;a href=&quot;https://en.wikipedia.org/wiki/Carry_(investment)&quot;&gt;carry&lt;/a&gt; or &lt;a href=&quot;https://en.wikipedia.org/wiki/Roll_yield&quot;&gt;roll yield&lt;/a&gt;).&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://etfdb.com/etf/BTGD&quot;&gt;BTGD&lt;/a&gt; holds bitcoin + gold.&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://www.wisdomtree.com/investments/etfs/capital-efficient/GDE&quot;&gt;GDE&lt;/a&gt; holds US stocks + gold.&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://www.wisdomtree.com/investments/etfs/capital-efficient/GDMN&quot;&gt;GDMN&lt;/a&gt; holds gold miner stocks + gold.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I am a big fan of managed futures trendfollowing—it’s a strategy with strong historical performance that has provided protection during market downturns, and I think it’s likely to continue working in the future (for more, see &lt;a href=&quot;https://dx.doi.org/10.2139/ssrn.2993026&quot;&gt;Hurst et al. (2017), “A Century of Evidence on Trend-Following Investing”&lt;/a&gt;&lt;sup id=&quot;fnref:7&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:7&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;6&lt;/a&gt;&lt;/sup&gt;). I’m ambivalent about carry (I’ve heard good arguments both for and against using it). I personally wouldn’t invest in bitcoin or gold, but if that’s your thing, return stacked ETFs give you a way to do it.&lt;/p&gt;

&lt;p&gt;(I don’t own RSST, but I hold something similar in my own portfolio—equities (&lt;a href=&quot;https://funds.alphaarchitect.com/aavm/&quot;&gt;AAVM&lt;/a&gt;) with managed futures stacked on top.)&lt;/p&gt;

&lt;h2 id=&quot;source-code&quot;&gt;Source code&lt;/h2&gt;

&lt;p&gt;Source code is available &lt;a href=&quot;https://github.com/michaeldickens/leveraged-etfs&quot;&gt;on GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id=&quot;acknowledgments&quot;&gt;Acknowledgments&lt;/h2&gt;

&lt;p&gt;Thanks to Corey Hoffstein for helping me work out the implementation details of my benchmark.&lt;sup id=&quot;fnref:6&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:6&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;7&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:2&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;“bonds” means any sort of bonds, including Treasury or corporate bonds. “Treasury bonds” means just Treasury bonds. &lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:4&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I weighted VTI at 62% and VXUS at 38% as of the beginning of 2024 because those are the weightings I get if I reverse-engineer from RSSB’s current weightings.&lt;/p&gt;

      &lt;p&gt;It would be simpler to use &lt;a href=&quot;https://etfdb.com/etf/VT/&quot;&gt;VT&lt;/a&gt;, which includes all the same stocks as VTI + VXUS. But RSSB itself holds VTI + VXUS, and I found that breaking out equities into two separate ETFs produces a slightly more accurate benchmark. &lt;a href=&quot;#fnref:4&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:3&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;RSSB gets exposure to bonds via an equal-weighted combination of bond futures at four maturities: 2-year, 5-year, 10-year, and long (i.e. 25- to 30-year). I replicated this using:&lt;/p&gt;

      &lt;ul&gt;
        &lt;li&gt;25% S&amp;amp;P 2-Year U.S. Treasury Note Futures Total Return Index&lt;/li&gt;
        &lt;li&gt;25% S&amp;amp;P 5-Year U.S. Treasury Note Futures Total Return Index&lt;/li&gt;
        &lt;li&gt;25% S&amp;amp;P 10-Year U.S. Treasury Note Futures Total Return Index&lt;/li&gt;
        &lt;li&gt;25% S&amp;amp;P Ultra T-Bond Futures Total Return Index&lt;/li&gt;
      &lt;/ul&gt;

      &lt;p&gt;These indexes should exactly match the bond futures that RSSB holds. &lt;a href=&quot;#fnref:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I wasn’t entirely sure what position to use to replicate NTSX’s bond holdings. Its &lt;a href=&quot;https://www.wisdomtree.com/investments/-/media/us-media-files/documents/resource-library/investment-case/the-case-for-the-efficient-core-fund-family.pdf&quot;&gt;materials&lt;/a&gt; include illustrative figures that use a 7-10 year Treasury index as a benchmark, which suggests I should use &lt;a href=&quot;https://etfdb.com/etf/IEF/&quot;&gt;IEF&lt;/a&gt; or perhaps 10-year Treasury futures. But the latest &lt;a href=&quot;https://www.wisdomtree.com/investments/-/media/us-media-files/documents/resource-library/fund-reports-schedules/statistics/wisdomtree-fi-export-statistics-ntsx.pdf&quot;&gt;holdings&lt;/a&gt; show that it uses a combination of bond futures of different durations.&lt;/p&gt;

      &lt;p&gt;I found the best correlation to NTSX when using a weighted combination of four Treasury futures:&lt;/p&gt;

      &lt;ul&gt;
        &lt;li&gt;12% S&amp;amp;P 2-Year U.S. Treasury Note Futures Total Return Index&lt;/li&gt;
        &lt;li&gt;12% S&amp;amp;P 5-Year U.S. Treasury Note Futures Total Return Index&lt;/li&gt;
        &lt;li&gt;24% S&amp;amp;P 10-Year U.S. Treasury Note Futures Total Return Index&lt;/li&gt;
        &lt;li&gt;12% S&amp;amp;P Ultra T-Bond Futures Total Return Index&lt;/li&gt;
      &lt;/ul&gt;

      &lt;p&gt;As of this writing, NTSX holds 12% in 10-Year U.S. Treasury Note Futures and 12% in Ultra 10-Year U.S. Treasury Note Futures (which are like the normal 10-year futures except that they are &lt;a href=&quot;https://www.cmegroup.com/markets/interest-rates/us-treasury/ultra-10-year-us-treasury-note.html&quot;&gt;more closely tied&lt;/a&gt; to a 10-year maturity). I did not use Ultra futures in my benchmark because they only launched a few years ago. &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
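      &lt;p&gt;The correlation check described in this footnote is easy to reproduce. Below is a minimal sketch that computes the Pearson correlation between a candidate benchmark’s daily returns and a fund’s returns; the return series shown are made-up placeholders, not real NTSX data.&lt;/p&gt;

```python
# Minimal sketch of the correlation check described in this footnote:
# compare a candidate benchmark's daily returns against the fund's
# returns. The return series below are made-up placeholders, not
# real NTSX data.
import statistics

def correlation(xs, ys):
    """Pearson correlation of two equal-length return series."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

fund_returns = [0.0010, -0.0020, 0.0040, 0.0000, -0.0010]
benchmark_returns = [0.0012, -0.0018, 0.0038, 0.0001, -0.0011]
r = correlation(fund_returns, benchmark_returns)
```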
    &lt;/li&gt;
    &lt;li id=&quot;fn:5&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Some minor changes that affect the return of the benchmark:&lt;/p&gt;

      &lt;ul&gt;
        &lt;li&gt;The funds rebalance whenever weights drift 5% away from the target. But the prospectuses for RSSB and NTSX were not clear about what exactly that meant—I can think of at least four different interpretations. Corey Hoffstein (who co-runs RSSB) explained to me exactly how the rebalancing works, and I assumed NTSX works the same way but I don’t know for sure. Different rebalancing methods can change the average return by as much as one percentage point—a fund might get lucky and rebalance into a position right before it rockets up, or the opposite might happen.&lt;/li&gt;
        &lt;li&gt;Changing how the benchmark invests in bonds can change the return. &lt;a href=&quot;https://etfdb.com/etf/GOVT/&quot;&gt;GOVT&lt;/a&gt; outperformed the Treasury futures ladder over the sample period. Changing the NTSX benchmark to use GOVT increased its return by 24 &lt;a href=&quot;https://en.wikipedia.org/wiki/Basis_point&quot;&gt;bps&lt;/a&gt; (but also decreased the correlation to NTSX from 0.997 to 0.993).&lt;/li&gt;
        &lt;li&gt;My program might have a bug. Shortly before posting this article, I discovered that I was incorrectly calculating how much cash the benchmark needed to borrow and thus overestimating interest payments by about 40 bps per year.&lt;/li&gt;
      &lt;/ul&gt;
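      &lt;p&gt;To illustrate the ambiguity, here is one possible interpretation of the 5% rule (an absolute drift band with a full reset to target), sketched in Python. This is a hypothetical reading, not necessarily how either fund actually rebalances.&lt;/p&gt;

```python
# One possible interpretation of "rebalance at 5% drift": reset fully
# to target whenever any weight moves more than 5 percentage points
# (absolute) away from its target. Hypothetical sketch only; other
# readings (relative drift, partial rebalancing) are equally plausible.

def needs_rebalance(weights, targets, band=0.05):
    """True if any weight drifted outside the absolute band."""
    return any(abs(w - t) > band for w, t in zip(weights, targets))

def drift(weights, returns):
    """Portfolio weights after one period of per-asset returns."""
    values = [w * (1 + r) for w, r in zip(weights, returns)]
    total = sum(values)
    return [v / total for v in values]

targets = [0.60, 0.40]                   # illustrative two-asset split
weights = drift(targets, [0.20, -0.05])  # stocks rally, bonds dip
if needs_rebalance(weights, targets):
    weights = list(targets)              # full reset to target
```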
      &lt;p&gt;&lt;a href=&quot;#fnref:5&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:7&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Hurst, B., Ooi, Y. H., &amp;amp; Pedersen, L. H. (2017). &lt;a href=&quot;https://dx.doi.org/10.2139/ssrn.2993026&quot;&gt;A Century of Evidence on Trend-Following Investing.&lt;/a&gt; &lt;a href=&quot;#fnref:7&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:6&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Originally I couldn’t figure out how to get my benchmark’s correlation to RSSB higher than 0.976. Turns out I needed to compare the benchmark to RSSB’s NAV, not its daily closing price. Corey explained to me that NAV and price diverge because daily futures prices settle at 3pm but exchanges close at 4pm, so any market movements in that hour will show up in RSSB’s price but not in its NAV or in the benchmark.&lt;/p&gt;

      &lt;p&gt;I had the same problem with my NTSX benchmark and was able to fix it the same way.&lt;/p&gt;

      &lt;p&gt;I also originally implemented rebalancing using an incorrect method, and Corey clarified the correct method to use. (This did not improve the correlation.) &lt;a href=&quot;#fnref:6&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>I Was Probably Wrong About HIIT and VO2max</title>
				<pubDate>Mon, 03 Feb 2025 00:00:00 -0800</pubDate>
				<link>http://mdickens.me/2025/02/03/I_was_probably_wrong_about_HIIT_and_VO2max/</link>
				<guid isPermaLink="true">http://mdickens.me/2025/02/03/I_was_probably_wrong_about_HIIT_and_VO2max/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;&lt;em&gt;This research piece is not as rigorous or polished as usual. I wrote it quickly in a stream-of-consciousness style, which means it’s more reflective of my actual reasoning process.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;My understanding of HIIT (high-intensity interval training) as of a week ago:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;VO2max is the best fitness indicator for predicting health and longevity.&lt;/li&gt;
  &lt;li&gt;HIIT, especially long-duration intervals (4+ minutes), is the best way to improve VO2max.&lt;/li&gt;
  &lt;li&gt;Intervals should be done at the maximum sustainable intensity.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I now believe those are all probably wrong.&lt;/p&gt;

&lt;!-- more --&gt;

&lt;p&gt;I think I got the wrong idea because a lot of HIIT/VO2max promoters cite scientific studies, which makes them seem superficially reasonable, but the studies they cite aren’t very good, or aren’t interpreted correctly.&lt;/p&gt;

&lt;p&gt;A few months ago I started incorporating some HIIT into my cardio routine. But I didn’t really know the best way to do it, so last week I decided to do some research. I looked up the most-cited meta-analyses on Google Scholar and I noticed my confusion when the meta-analyses didn’t seem to support the conventional wisdom that HIIT is the best way to improve VO2max.&lt;/p&gt;

&lt;p&gt;The most comprehensive single source I found was a meta-meta-analysis by Crowley et al. (2022)&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;, which reviewed the findings of meta-analyses on HIT (high-intensity training) vs LIT (low-intensity training) for VO2max. The key quote:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;Evidence from the meta-analyses that directly compared LIT versus HIT protocols on VO2max was, ostensibly, reported as either trivial or inconclusive. Three out of the six included meta-analyses reported small/moderate beneficial effects of HIT over LIT (α &amp;lt; 0.05). However, two of these reviews reported “substantial” heterogeneity (I2&amp;gt;0.75), small-study bias (p &amp;lt; 0.10), a relatively small pooled sample size (i.e., &amp;lt;1,000 participants), had a high degree of overlap (CCA = 11%) and reported several moderators (e.g., baseline fitness levels, age, HIT variables [e.g., volume, frequency, and duration]), which likely affected results.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Also, in my naiveté I had assumed that these were meta-analyses of RCTs, but in fact most of the included studies weren’t even RCTs:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;Scribbans et al. reported that none of their included studies applied RCTs, Sloth et al. reported only four studies that applied RCTs design, and Gist et al. reported that the majority of included studies were RCTs.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;(Note: Gist et al.&lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt;, which did look mainly at RCTs, found that sprint interval training did not work better than endurance training (Cohen’s d = 0.04, 95% CI = -0.17 to 0.24).)&lt;/p&gt;

&lt;p&gt;So I thought: okay, these meta-analyses don’t seem to favor HIIT much, if at all. But maybe they’re done by stuffy academics who don’t know anything about real training. Are these meta-analyses considered respectable? I went to see what the Barbell Medicine&lt;sup id=&quot;fnref:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt; guys thought. I have a lot of respect for them when it comes to strength training. I don’t know how much they know about cardio, but one of them is a former competitive swimmer, so they probably know &lt;em&gt;something&lt;/em&gt;. And they have good epistemics on strength training, which might generalize. They did a podcast&lt;sup id=&quot;fnref:4&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:4&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt; on HIIT with some useful content:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;They started the podcast by criticizing a tweet&lt;sup id=&quot;fnref:5&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:5&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;5&lt;/a&gt;&lt;/sup&gt; in which fitness influencer Rhonda Patrick recommended a collection of “evidence-based HIIT protocols”. Their criticism mainly focused on how (they claimed) the provided HIIT protocols were way too hard. They quoted two responses&lt;sup id=&quot;fnref:6&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:6&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;6&lt;/a&gt;&lt;/sup&gt;&lt;sup id=&quot;fnref:7&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:7&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;7&lt;/a&gt;&lt;/sup&gt; by exercise physiologists arguing the same.&lt;/li&gt;
  &lt;li&gt;They went on to talk about some of the research on HIIT, citing the same meta-analyses that I’d looked at.&lt;/li&gt;
  &lt;li&gt;Their ultimate recommendation (also given in an article&lt;sup id=&quot;fnref:8&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:8&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;8&lt;/a&gt;&lt;/sup&gt;): “it is reasonable for about 80% of training to be of moderate intensity (zones 1-2), and about 20% reserved for higher intensity work (HIIT or SIT)”. They also said it’s good to use a variety of HIIT protocols and that there is no single optimal protocol.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I didn’t know if the Barbell Medicine guys were right about any of that, but it gave me some direction.&lt;/p&gt;

&lt;p&gt;I had a look at the website of one of the people from that Twitter thread, Steve Magness&lt;sup id=&quot;fnref:9&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:9&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;9&lt;/a&gt;&lt;/sup&gt;. He ran a 4:01 mile in high school so he probably has some idea of what he’s talking about.&lt;/p&gt;

&lt;p&gt;Now, when some science-literate fitness influencers like Peter Attia and Rhonda Patrick give some recommendations about HIIT, and some other people like Steve Magness and Barbell Medicine disagree with them, I don’t have sufficient expertise to say who’s right. Both sides have the trappings of scientific credibility (e.g. citing multiple studies). But one thing I &lt;em&gt;can&lt;/em&gt; do is check their logic.&lt;/p&gt;

&lt;p&gt;So I checked Steve Magness’s logic. He wrote in a Twitter thread:&lt;sup id=&quot;fnref:10&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:10&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;10&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;You need all intensities to max VO2max. So it’s dumb to pit one vs. other&lt;/p&gt;

  &lt;p&gt;But research shows continuous likely matches HIIT for Vo2max increase&lt;/p&gt;

  &lt;p&gt;HIIT appears better when you constrain to 8 weeks but when you look over longer time it equalizes&lt;/p&gt;

  &lt;p&gt;Here’s data from a recent review.&lt;sup id=&quot;fnref:11&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:11&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;11&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

  &lt;p&gt;You can see HIIT appears to increase VO2max more because of the time frame of most training studies (6-8 weeks).&lt;/p&gt;

  &lt;p&gt;Intense work gets big boost, then levels off. Endurance work gives Longer more gradual boost.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;img src=&quot;https://mdickens.me/assets/images/Molmen-2024.jpeg&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;The referenced review does seem to support Magness’s argument, but I don’t know if the review is any good. What I do know is that Magness’s logic makes sense. It stands to reason that a more intense exercise protocol will cause faster short-term gains, but it can’t keep producing those rapid gains forever. And it stands to reason that if most studies on HIIT vs. LIT only last 6-12 weeks, then they will underestimate the long-term benefits of LIT.&lt;/p&gt;

&lt;p&gt;That makes logical sense to me, which makes me think Steve Magness knows what he’s talking about.&lt;/p&gt;

&lt;p&gt;He also wrote an article&lt;sup id=&quot;fnref:12&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:12&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;12&lt;/a&gt;&lt;/sup&gt; arguing that some people care too much about VO2max for longevity:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;Vo2max matters. But it’s just one component of many that make up both performance and aerobic fitness. And that’s important because if we return to the original claims that Vo2max is the key indicator of longevity, we’ll find that the majority of the studies cited did NOT even use Vo2max as the main variable. They used performance! In the majority of research, peak speed and incline during the exhausting test was the main correlate to longevity.&lt;/p&gt;

  &lt;p&gt;The large study&lt;sup id=&quot;fnref:13&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:13&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;13&lt;/a&gt;&lt;/sup&gt; on 750,000 veterans that found a 4-fold higher mortality risk for low versus high fitness used peak speed and incline, not Vo2max. Same with the research&lt;sup id=&quot;fnref:14&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:14&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;14&lt;/a&gt;&lt;/sup&gt; on 120,000 individuals finding a 5x difference in the risk of early death.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That makes logical sense to me. VO2max is only one aspect of fitness (albeit an important one), and it stands to reason that your actual ability to perform physical tasks is a better measure of physical health.&lt;/p&gt;

&lt;p&gt;I did also look at some of the evidence cited in that article, specifically the Harber et al. (2017)&lt;sup id=&quot;fnref:15&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:15&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;15&lt;/a&gt;&lt;/sup&gt; meta-analysis, and Table 2 confirms Magness’s claim—most studies measured speed, time, or total work performed, not VO2max directly.&lt;/p&gt;

&lt;p&gt;Insofar as I can verify Steve Magness’s claims, they seem to be correct. He also claims that HIIT protocols should not be “all-out”—for example, 60-second running intervals should be done between a 5K and a one-mile pace.&lt;sup id=&quot;fnref:16&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:16&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;16&lt;/a&gt;&lt;/sup&gt; He doesn’t cite any research on that claim, and as far as I know, there &lt;em&gt;isn’t&lt;/em&gt; really research on it; it’s just how most high-performing athletes train. But since he seems right about the verifiable claims he’s made, I expect he’s right about that, too.&lt;/p&gt;

&lt;p&gt;On the subject of checking people’s logic, here is an (admittedly cherry-picked) quote from the other side of the argument:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;[I]s anything done in a total of 10-minutes that big of a deal?&lt;/p&gt;

  &lt;p&gt;If anyone is misinterpreting my statement as prescriptive:&lt;/p&gt;

  &lt;p&gt;My underlying point was that anything you can do in 10-minutes is limited on a relative harm basis, even if you do a lot.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This does not make logical sense to me. You can absolutely hurt yourself in less than 10 minutes. Even putting injury risk aside,&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;I’ve done 10-minute hill sprint intervals in the morning that left me feeling tired all day.&lt;/li&gt;
  &lt;li&gt;A 10-rep max squat takes less than 60 seconds, but it makes my legs sore for the next two days.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;(Maybe that’s less logic and more personal experience, but a single example is enough to disprove a universal claim.)&lt;/p&gt;

&lt;p&gt;I think Barbell Medicine has good logic, too. In their podcast on HIIT, they talk about how in strength training, nobody lifts the maximum possible weight every week. (And I have enough personal experience to know that maxing out every week wouldn’t work.) So it probably doesn’t make sense to max out your aerobic capacity every week, either. I’m not sure strength training and cardio work the same way in that respect, but I expect things to be the same unless I have reason to believe they’re different.&lt;/p&gt;

&lt;p&gt;The Barbell Medicine article on HIIT&lt;sup id=&quot;fnref:8:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:8&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;8&lt;/a&gt;&lt;/sup&gt; has some nice sample workouts that line up with Magness’s recommendations:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;&lt;strong&gt;Workout #1&lt;/strong&gt;&lt;/p&gt;

  &lt;p&gt;4 to 6 rounds of: 30 seconds on at 600-800 m running pace (or a speed sustainable in the range of ~90-150 seconds), 4 min off / easy effort&lt;/p&gt;

  &lt;p&gt;&lt;strong&gt;Workout #2&lt;/strong&gt;&lt;/p&gt;

  &lt;p&gt;8 to 10 rounds of: 1-minute on at 1 mile-5 km running pace (or a speed sustainable in the range of 6-25 minutes), 1 minute off&lt;/p&gt;

  &lt;p&gt;&lt;strong&gt;Workout #3&lt;/strong&gt;&lt;/p&gt;

  &lt;p&gt;3 to 5 rounds of: 5 minutes on at zone 4 heart rate (85-95% max), 3 min rest&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I want to investigate this further, but here’s what I tentatively believe:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;VO2max predicts longevity, but athletic performance matters more than VO2max alone.&lt;/li&gt;
  &lt;li&gt;I should exercise at a variety of intensities to get a well-rounded fitness capacity, but HIIT isn’t particularly better for improving fitness than LIT, and 4-minute intervals aren’t particularly better than other interval schemes.&lt;/li&gt;
  &lt;li&gt;Intervals should &lt;em&gt;not&lt;/em&gt; be done at the maximum sustainable intensity; they should be done at an intensity that’s challenging but doesn’t leave you wiped out. As Magness wrote, “The goal isn’t to create fatigue, that’s easy to do. The goal is to slightly embarrass your body in the right direction.”&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Coming back to my own training: I have loathed every version of HIIT I’ve tried so far. But that’s because I listened to the people saying that HIIT should be “all-out.” Next time I’m going to do HIIT at a more comfortable pace.&lt;/p&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Crowley, E., Powell, C., Carson, B. P., &amp;amp; W. Davies, R. (2022). &lt;a href=&quot;https://doi.org/10.1155/2022/9310710&quot;&gt;The Effect of Exercise Training Intensity on VO2max in Healthy Adults: An Overview of Systematic Reviews and Meta-Analyses.&lt;/a&gt; &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:2&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Gist, N. H., Fedewa, M. V., Dishman, R. K., &amp;amp; Cureton, K. J. (2013). &lt;a href=&quot;https://doi.org/10.1007/s40279-013-0115-0&quot;&gt;Sprint Interval Training Effects on Aerobic Capacity: A Systematic Review and Meta-Analysis.&lt;/a&gt; &lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:3&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;&lt;a href=&quot;https://www.barbellmedicine.com/&quot;&gt;https://www.barbellmedicine.com/&lt;/a&gt; &lt;a href=&quot;#fnref:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:4&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;&lt;a href=&quot;https://redcircle.com/shows/0cc66fc4-ccb8-4c60-8cc6-7367e52c4159/episodes/706bc687-0a98-4057-8e2e-e4db349bba4a&quot;&gt;https://redcircle.com/shows/0cc66fc4-ccb8-4c60-8cc6-7367e52c4159/episodes/706bc687-0a98-4057-8e2e-e4db349bba4a&lt;/a&gt; &lt;a href=&quot;#fnref:4&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:5&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;&lt;a href=&quot;https://x.com/foundmyfitness/status/1844811732080021919&quot;&gt;https://x.com/foundmyfitness/status/1844811732080021919&lt;/a&gt; &lt;a href=&quot;#fnref:5&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:6&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;&lt;a href=&quot;https://x.com/StephenSeiler/status/1845357464130031873&quot;&gt;https://x.com/StephenSeiler/status/1845357464130031873&lt;/a&gt; &lt;a href=&quot;#fnref:6&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:7&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;&lt;a href=&quot;https://x.com/stevemagness/status/1845079291320525202&quot;&gt;https://x.com/stevemagness/status/1845079291320525202&lt;/a&gt; &lt;a href=&quot;#fnref:7&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:8&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;&lt;a href=&quot;https://www.barbellmedicine.com/blog/hiit-high-intensity-interval-training/&quot;&gt;https://www.barbellmedicine.com/blog/hiit-high-intensity-interval-training/&lt;/a&gt; &lt;a href=&quot;#fnref:8&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:8:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:9&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;&lt;a href=&quot;https://www.stevemagness.com/about/&quot;&gt;https://www.stevemagness.com/about/&lt;/a&gt; &lt;a href=&quot;#fnref:9&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:10&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;&lt;a href=&quot;https://x.com/stevemagness/status/1849918347086795151&quot;&gt;https://x.com/stevemagness/status/1849918347086795151&lt;/a&gt; &lt;a href=&quot;#fnref:10&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:11&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Mølmen, K. S., Almquist, N. W., &amp;amp; Skattebo, Ø. (2024). &lt;a href=&quot;https://link.springer.com/article/10.1007/s40279-024-02120-2&quot;&gt;Effects of Exercise Training on Mitochondrial and Capillary Growth in Human Skeletal Muscle: A Systematic Review and Meta-Regression.&lt;/a&gt; doi: &lt;a href=&quot;https://doi.org/10.1007/s40279-024-02120-2&quot;&gt;10.1007/s40279-024-02120-2&lt;/a&gt; &lt;a href=&quot;#fnref:11&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:12&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;&lt;a href=&quot;https://thegrowtheq.com/longevity-and-vo2max-does-it-matter/&quot;&gt;https://thegrowtheq.com/longevity-and-vo2max-does-it-matter/&lt;/a&gt; &lt;a href=&quot;#fnref:12&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:13&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Kokkinos, P., Faselis, C., Samuel, I. B. H., Pittaras, A., Doumas, M., Murphy, R., Heimall, M. S. et al. (2022). &lt;a href=&quot;https://doi.org/10.1016/j.jacc.2022.05.031&quot;&gt;Cardiorespiratory Fitness and Mortality Risk Across the Spectra of Age, Race, and Sex.&lt;/a&gt; &lt;a href=&quot;#fnref:13&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:14&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Mandsager, K., Harb, S., Cremer, P., Phelan, D., Nissen, S. E., &amp;amp; Jaber, W. (2018). &lt;a href=&quot;https://doi.org/10.1001/jamanetworkopen.2018.3605&quot;&gt;Association of Cardiorespiratory Fitness With Long-term Mortality Among Adults Undergoing Exercise Treadmill Testing.&lt;/a&gt; &lt;a href=&quot;#fnref:14&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:15&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Harber, M. P., Kaminsky, L. A., Arena, R., Blair, S. N., Franklin, B. A., Myers, J., &amp;amp; Ross, R. (2017). &lt;a href=&quot;https://mdickens.me/materials/harber2017.pdf&quot;&gt;Impact of Cardiorespiratory Fitness on All-Cause and Disease-Specific Mortality: Advances Since 2009.&lt;/a&gt; doi: &lt;a href=&quot;https://doi.org/10.1016/j.pcad.2017.03.001&quot;&gt;10.1016/j.pcad.2017.03.001&lt;/a&gt; &lt;a href=&quot;#fnref:15&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:16&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;&lt;a href=&quot;https://x.com/stevemagness/status/1845079292784365803&quot;&gt;https://x.com/stevemagness/status/1845079292784365803&lt;/a&gt; &lt;a href=&quot;#fnref:16&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>Retroactive If-Then Commitments</title>
				<pubDate>Sat, 01 Feb 2025 00:00:00 -0800</pubDate>
				<link>http://mdickens.me/2025/02/01/retroactive_if-then_commitments/</link>
				<guid isPermaLink="true">http://mdickens.me/2025/02/01/retroactive_if-then_commitments/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;An &lt;a href=&quot;https://www.lesswrong.com/posts/sMtS9Eof6QC6sPouB/if-then-commitments-for-ai-risk-reduction-by-holden&quot;&gt;if-then commitment&lt;/a&gt; is a framework for responding to AI risk: “If an AI model has capability X, then AI development/deployment must be halted until mitigations Y are put in place.”&lt;/p&gt;

&lt;p&gt;As an extension of this approach, we should consider &lt;strong&gt;retroactive if-then commitments&lt;/strong&gt;. We should behave &lt;em&gt;as if&lt;/em&gt; we wrote if-then commitments a few years ago, and we should commit to implementing whatever mitigations we &lt;em&gt;would have&lt;/em&gt; committed to back then.&lt;/p&gt;

&lt;p&gt;Imagine how an if-then commitment might have been written in 2020:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;Pause AI development and figure out mitigations if:&lt;/p&gt;

  &lt;ul&gt;
    &lt;li&gt;AI exhibits what looks like deceptive or &lt;a href=&quot;https://www.lesswrong.com/posts/8gy7c8GAPkuu6wTiX/frontier-models-are-capable-of-in-context-scheming&quot;&gt;misaligned&lt;/a&gt; behavior, or feigns alignment (&lt;a href=&quot;https://assets.anthropic.com/m/983c85a201a962f/original/Alignment-Faking-in-Large-Language-Models-full-paper.pdf&quot;&gt;1&lt;/a&gt;, &lt;a href=&quot;https://www.lesswrong.com/posts/njAZwT8nkHnjipJku/alignment-faking-in-large-language-models?commentId=uXBf8XwDyryXYiTRu&quot;&gt;1b&lt;/a&gt;, &lt;a href=&quot;https://www.transformernews.ai/p/openai-o1-alignment-faking&quot;&gt;2&lt;/a&gt;)&lt;/li&gt;
    &lt;li&gt;AI &lt;a href=&quot;https://www.zmescience.com/science/news-science/chat-gpt-escaped-containment/&quot;&gt;breaks out of containment&lt;/a&gt; in a toy example&lt;/li&gt;
    &lt;li&gt;AI &lt;a href=&quot;https://www.forbes.com/sites/daveywinder/2024/11/05/google-claims-world-first-as-ai-finds-0-day-security-vulnerability/&quot;&gt;finds a real-world zero-day vulnerability&lt;/a&gt;&lt;/li&gt;
    &lt;li&gt;AI &lt;a href=&quot;https://www.metaculus.com/questions/3698/when-will-an-ai-achieve-a-98th-percentile-score-or-higher-in-a-mensa-admission-test/&quot;&gt;qualifies for Mensa&lt;/a&gt;&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;&lt;/li&gt;
    &lt;li&gt;AI exhibits some degree of &lt;a href=&quot;https://www.anthropic.com/news/3-5-models-and-computer-use&quot;&gt;agentic capabilities&lt;/a&gt;&lt;/li&gt;
    &lt;li&gt;AI &lt;a href=&quot;https://x.com/elder_plinius/status/1858177213201367478&quot;&gt;writes malware&lt;/a&gt;&lt;/li&gt;
  &lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;Well, AI models have now done, or nearly done, all of those things.&lt;/p&gt;

&lt;p&gt;We don’t know what mitigations are appropriate, so AI companies should pause development until (at a minimum) AI safety researchers agree on what mitigations are warranted, and those mitigations are then fully implemented.&lt;/p&gt;

&lt;p&gt;(You could argue about whether AI &lt;em&gt;really&lt;/em&gt; hit those capability milestones, but that doesn’t particularly matter. You need to pause and/or restrict development of an AI system when it looks &lt;em&gt;potentially&lt;/em&gt; dangerous, not &lt;em&gt;definitely&lt;/em&gt; dangerous.)&lt;/p&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Okay, technically it did not score well enough to qualify, but it scored well enough that there was some ambiguity about whether it qualified, which is only a little bit less concerning. &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>The 7 Best High-Protein Breakfast Cereals</title>
				<pubDate>Fri, 17 Jan 2025 00:00:00 -0800</pubDate>
				<link>http://mdickens.me/2025/01/17/high_protein_breakfast_cereals/</link>
				<guid isPermaLink="true">http://mdickens.me/2025/01/17/high_protein_breakfast_cereals/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;&lt;em&gt;Updated 2025-03-19 to add Catalina Crunch Cinnamon Toast.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;(I write listicles now)&lt;/p&gt;

&lt;p&gt;(there are only 7 eligible high-protein breakfast cereals, so the ones at the bottom are still technically among the 7 best even though they’re not good)&lt;/p&gt;

&lt;p&gt;If you search the internet, you can find rankings of the best “high-protein” breakfast cereals. But most of the entries on those lists don’t even have that much protein. I don’t like that, so I made my own list.&lt;/p&gt;

&lt;p&gt;This is my ranking of genuinely high-protein breakfast cereals, which I define as containing at least 25% calories from protein.&lt;/p&gt;

&lt;p&gt;Many food products like to advertise how many grams of protein they have per serving. On its own, that number is meaningless because it depends on how big a serving is. Hypothetically, if a food had 6g protein per serving but each serving contained 2000 calories, that would be a terrible deal. The number that actually matters is the &lt;em&gt;proportion&lt;/em&gt; of calories from protein.&lt;/p&gt;
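&lt;p&gt;To make that arithmetic concrete, here’s a quick sketch (using the standard 4-calories-per-gram figure for protein):&lt;/p&gt;

```python
# Quick sketch of the metric used throughout this list: the share of
# calories that come from protein (protein provides ~4 calories per gram).
def pct_calories_from_protein(protein_grams, total_calories):
    return 100 * (protein_grams * 4) / total_calories

# The hypothetical bad deal above: 6 g protein in a 2000-calorie serving
print(pct_calories_from_protein(6, 2000))  # prints 1.2, nowhere near high-protein
```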

&lt;p&gt;My ranking only includes vegan cereals because I’m vegan. Fortunately most cereals are vegan anyway. The main exception is that some cereals contain whey protein, but that’s not too common—most of them use soy, pea, or wheat protein instead.&lt;/p&gt;

&lt;h2 id=&quot;high-protein-cereals-ranked-by-flavor&quot;&gt;High-protein cereals, ranked by flavor&lt;/h2&gt;

&lt;!-- more --&gt;

&lt;ul id=&quot;markdown-toc&quot;&gt;
  &lt;li&gt;&lt;a href=&quot;#high-protein-cereals-ranked-by-flavor&quot; id=&quot;markdown-toc-high-protein-cereals-ranked-by-flavor&quot;&gt;High-protein cereals, ranked by flavor&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#1-oatmeal-with-added-protein-powder-27-calories-from-protein-if-you-make-it-the-way-i-do&quot; id=&quot;markdown-toc-1-oatmeal-with-added-protein-powder-27-calories-from-protein-if-you-make-it-the-way-i-do&quot;&gt;1. &lt;strong&gt;Oatmeal with added protein powder&lt;/strong&gt; (27% calories from protein, if you make it the way I do)&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#2-special-k-high-protein-chocolate-almond-33-calories-from-protein&quot; id=&quot;markdown-toc-2-special-k-high-protein-chocolate-almond-33-calories-from-protein&quot;&gt;2. &lt;strong&gt;Special K High Protein Chocolate Almond&lt;/strong&gt; (33% calories from protein)&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#3-catalina-crunch-40-calories-from-protein&quot; id=&quot;markdown-toc-3-catalina-crunch-40-calories-from-protein&quot;&gt;3. &lt;strong&gt;Catalina Crunch&lt;/strong&gt; (40% calories from protein)&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#4-wheaties-protein-supplemented-with-lysine-32-calories-from-protein&quot; id=&quot;markdown-toc-4-wheaties-protein-supplemented-with-lysine-32-calories-from-protein&quot;&gt;4. &lt;strong&gt;Wheaties Protein&lt;/strong&gt; (supplemented with lysine) (32% calories from protein)&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#5-post-premier-protein-44-calories-from-protein&quot; id=&quot;markdown-toc-5-post-premier-protein-44-calories-from-protein&quot;&gt;5. &lt;strong&gt;Post Premier Protein&lt;/strong&gt; (44% calories from protein)&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#6-special-k-zero-50-calories-from-protein&quot; id=&quot;markdown-toc-6-special-k-zero-50-calories-from-protein&quot;&gt;6. &lt;strong&gt;Special K Zero&lt;/strong&gt; (50% calories from protein)&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#7-three-wishes-24-to-27-calories-from-protein-depending-on-flavor&quot; id=&quot;markdown-toc-7-three-wishes-24-to-27-calories-from-protein-depending-on-flavor&quot;&gt;7. &lt;strong&gt;Three Wishes&lt;/strong&gt; (24% to 27% calories from protein, depending on flavor)&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#honorable-mention-kashi-go&quot; id=&quot;markdown-toc-honorable-mention-kashi-go&quot;&gt;Honorable mention: Kashi Go&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#unranked-because-its-not-vegan-magic-spoon-33-to-37-calories-from-protein-depending-on-flavor&quot; id=&quot;markdown-toc-unranked-because-its-not-vegan-magic-spoon-33-to-37-calories-from-protein-depending-on-flavor&quot;&gt;Unranked because it’s not vegan: &lt;strong&gt;Magic Spoon&lt;/strong&gt; (33% to 37% calories from protein, depending on flavor)&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#some-other-non-vegan-cereals-that-i-know-nothing-about&quot; id=&quot;markdown-toc-some-other-non-vegan-cereals-that-i-know-nothing-about&quot;&gt;Some other non-vegan cereals that I know nothing about&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#price-table&quot; id=&quot;markdown-toc-price-table&quot;&gt;Price table&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#notes&quot; id=&quot;markdown-toc-notes&quot;&gt;Notes&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;1-oatmeal-with-added-protein-powder-27-calories-from-protein-if-you-make-it-the-way-i-do&quot;&gt;1. &lt;strong&gt;Oatmeal with added protein powder&lt;/strong&gt; (27% calories from protein, if you make it the way I do)&lt;/h3&gt;

&lt;figure&gt;
&lt;img src=&quot;/assets/images/Oatmeal.jpg&quot; style=&quot;height:300px&quot; /&gt;
&lt;figcaption style=&quot;font-size: 0.7em&quot;&gt;This is regular oatmeal because I couldn&apos;t find a stock photo of oatmeal mixed with protein powder.&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;p&gt;You can buy pre-mixed oatmeal and protein powder, but it’s unnecessarily expensive so I prefer to mix it myself. Obviously the amount of protein varies depending on how much protein powder you add. I personally like to mix:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;two servings of oats (= one cup, or 300 calories)&lt;/li&gt;
  &lt;li&gt;one scoop of protein powder (= 1/3 cup, or 25 grams, or 90 calories)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Those proportions provide 30% calories from protein. I find the consistency gets sticky if you add more protein than that. If you don’t like the consistency at this ratio, you can add more oats/less protein.&lt;/p&gt;

&lt;p&gt;I save time by mixing the oats and protein powder in a giant jug. The jug lasts me for a few weeks and this way I don’t have to measure out the proportions every morning.&lt;/p&gt;

&lt;p&gt;Oatmeal is my favorite cereal because there are so many ways to make it. I like to mix in blueberries, blackberries, or bananas, which add nutrients and cover up the protein-y flavor. (Plain oatmeal with protein powder tastes kind of weird.)&lt;/p&gt;

&lt;p&gt;This mixture has 30% calories from protein without fruit and about 27% with fruit.&lt;/p&gt;

&lt;h3 id=&quot;2-special-k-high-protein-chocolate-almond-33-calories-from-protein&quot;&gt;2. &lt;strong&gt;Special K High Protein Chocolate Almond&lt;/strong&gt; (33% calories from protein)&lt;/h3&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/SKHP.png&quot; style=&quot;height:300px&quot; /&gt;&lt;/p&gt;

&lt;p&gt;This is my favorite cold breakfast cereal. Most high-protein cereals taste merely tolerable, but this one tastes actively &lt;em&gt;good&lt;/em&gt;. It has a nice crunchy texture from the combination of almonds and cereal flakes, and it has enough sugar to give it a good flavor.&lt;/p&gt;

&lt;p&gt;Unfortunately it doesn’t seem to be available anymore. I reached out to customer service to ask if it’s discontinued and they said they’re still producing it, but I haven’t been able to find it anywhere. (Nobody else seems to be able to find it either—see the product reviews &lt;a href=&quot;https://www.kroger.com/p/kellogg-s-special-k-high-protein-chocolate-almond-cereal/0003800028203&quot;&gt;here&lt;/a&gt;.)&lt;/p&gt;

&lt;p&gt;(Special K High Protein should not be confused with Special K Protein, which did not qualify for my list because it has less than 25% calories from protein.)&lt;/p&gt;

&lt;h3 id=&quot;3-catalina-crunch-40-calories-from-protein&quot;&gt;3. &lt;strong&gt;Catalina Crunch&lt;/strong&gt; (40% calories from protein)&lt;/h3&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/Catalina-Crunch-cinnamon.webp&quot; style=&quot;height:300px&quot; /&gt;&lt;/p&gt;

&lt;p&gt;This is my 3rd favorite high-protein cereal, and it has more protein than #1 or #2. Catalina Crunch has become a staple for me now that Special K High Protein is apparently discontinued.&lt;/p&gt;

&lt;p&gt;There are a number of flavors, but the only ones I’ve tried are Cinnamon Toast and Dark Chocolate. Originally I had Dark Chocolate on this list at #4, but later I tried Cinnamon Toast, which is better, so I’ve moved Catalina Crunch up to rank 3.&lt;/p&gt;

&lt;p&gt;It doesn’t get soggy in milk and it doesn’t fall apart in your mouth. It’s sugar-free, but it tastes surprisingly good to me (I don’t usually like sugar-free cereals). The downside is it’s pretty expensive compared to most breakfast cereals.&lt;/p&gt;

&lt;h3 id=&quot;4-wheaties-protein-supplemented-with-lysine-32-calories-from-protein&quot;&gt;4. &lt;strong&gt;Wheaties Protein&lt;/strong&gt; (supplemented with lysine) (32% calories from protein)&lt;/h3&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/Wheaties-Protein.webp&quot; style=&quot;height:300px&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Wheaties Protein is the last high-protein cereal I’d consider &lt;em&gt;good&lt;/em&gt;, like I actively enjoy eating it.&lt;/p&gt;

&lt;p&gt;Be aware that this cereal’s protein comes primarily from wheat, which doesn’t have much of the amino acid lysine. It has plenty of every other essential amino acid, but if you eat a lot of this cereal, you need to make sure you get lysine from somewhere else.&lt;/p&gt;

&lt;p&gt;To get a full amino acid profile, you need to add about 1 gram of lysine per 60 grams of wheat protein. I personally buy &lt;a href=&quot;https://www.amazon.com/NOW-L-Lysine-500-100-Tablets/dp/B000MGOWOC&quot;&gt;these lysine supplements&lt;/a&gt; and take one 500mg pill per bowl of cereal.&lt;/p&gt;

&lt;h3 id=&quot;5-post-premier-protein-44-calories-from-protein&quot;&gt;5. &lt;strong&gt;Post Premier Protein&lt;/strong&gt; (44% calories from protein)&lt;/h3&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/Post-Premier-Protein-Chocolate-Almond.jpg&quot; style=&quot;height:300px&quot; /&gt;&lt;/p&gt;

&lt;p&gt;This cereal has two different flavors, Chocolate Almond and Mixed Berry Almond. I personally like the chocolate flavor better. Both have a passably good flavor and texture but they have a weird protein-y crunchiness.&lt;/p&gt;

&lt;p&gt;Post Premier Protein gets most of its protein from wheat, which, as I mentioned before, doesn’t contain much lysine. Fortunately it also contains pea protein, which has an abundance of lysine.&lt;/p&gt;

&lt;p&gt;I don’t know the exact protein ratios, but I believe the overall balance still doesn’t contain enough lysine, so it would be prudent to get some extra lysine from somewhere else. I personally would take one 500mg lysine pill per two bowls of cereal. (I always eat two bowls for breakfast because I’m a growing boy.)&lt;/p&gt;

&lt;h3 id=&quot;6-special-k-zero-50-calories-from-protein&quot;&gt;6. &lt;strong&gt;Special K Zero&lt;/strong&gt; (50% calories from protein)&lt;/h3&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/Special-K-Zero.webp&quot; style=&quot;height:300px&quot; /&gt;&lt;/p&gt;

&lt;p&gt;If you had to guess which of Special K High Protein and Special K Zero had more protein, you might think it’s the one with “protein” in the name, but you’d be wrong. This cereal contains an extraordinary 50% calories from protein with a good amino acid composition. Unfortunately, the reason it has so much protein is that it’s basically just lumps of protein powder.&lt;/p&gt;

&lt;p&gt;When I take a bite of this cereal, it tastes good for the first five seconds or so. Then it dissolves into a powdery mush—it feels like I’ve poured wet protein powder directly into my mouth. I wouldn’t eat Special K Zero unless I was really desperate for protein. (Even then, I would rather have a protein shake.)&lt;/p&gt;

&lt;p&gt;But some people seem to like it, so your mileage may vary.&lt;/p&gt;

&lt;h3 id=&quot;7-three-wishes-24-to-27-calories-from-protein-depending-on-flavor&quot;&gt;7. &lt;strong&gt;Three Wishes&lt;/strong&gt; (24% to 27% calories from protein, depending on flavor)&lt;/h3&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/Three-Wishes.webp&quot; style=&quot;height:300px&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Like Special K Zero, Three Wishes feels like eating protein powder flakes. But it only has half as much protein as Special K Zero. If you can stomach eating protein powder flakes, you might as well eat Special K Zero instead.&lt;/p&gt;

&lt;p&gt;(I’ve only tried one flavor of Three Wishes, but it was a while ago and I don’t remember which flavor it was. I assume the other flavors have the same bad protein-y texture as the one I tried.)&lt;/p&gt;

&lt;h3 id=&quot;honorable-mention-kashi-go&quot;&gt;Honorable mention: Kashi Go&lt;/h3&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/Kashi-Go.jpg&quot; style=&quot;height:300px&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Kashi Go, with 24% calories from protein, just barely does not qualify for my list. It has a good flavor and texture, but it makes my mouth feel weird if I eat too much of it. (I get the same mouth feeling when I eat a lot of spinach.) If it had that extra one percentage point of protein, I would put it at #4 on my list.&lt;/p&gt;

&lt;h3 id=&quot;unranked-because-its-not-vegan-magic-spoon-33-to-37-calories-from-protein-depending-on-flavor&quot;&gt;Unranked because it’s not vegan: &lt;strong&gt;Magic Spoon&lt;/strong&gt; (33% to 37% calories from protein, depending on flavor)&lt;/h3&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/Magic-Spoon.webp&quot; style=&quot;height:300px&quot; /&gt;&lt;/p&gt;

&lt;p&gt;I’ve never eaten Magic Spoon because it’s not vegan (it contains milk protein).&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=UTHKgVPZ-is&amp;amp;t=1135s&quot;&gt;Drew Gooden says it’s bad&lt;/a&gt; and he makes funny YouTube videos which means he must have good opinions about cereal. His description of eating Magic Spoon sounds a lot like my experience eating Special K Zero and Three Wishes—it tastes good for a few seconds, then it turns into a protein mush.&lt;/p&gt;

&lt;h3 id=&quot;some-other-non-vegan-cereals-that-i-know-nothing-about&quot;&gt;Some other non-vegan cereals that I know nothing about&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;Snack House Keto Cereal: 44% calories from protein (uses milk protein)&lt;/li&gt;
  &lt;li&gt;Julian Bakery ProGranola Cereal: 44% calories from protein (uses egg white protein (weird choice but ok))&lt;/li&gt;
  &lt;li&gt;Perfect Keto Cereal: 36% calories from protein (uses milk protein) (appears to be discontinued)&lt;/li&gt;
  &lt;li&gt;Wonderworks Keto Friendly Breakfast Cereal: 35% calories from protein (uses milk and soy protein)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Those are all the high-protein breakfast cereals that I’m aware of.&lt;/p&gt;

&lt;h2 id=&quot;price-table&quot;&gt;Price table&lt;/h2&gt;

&lt;p&gt;This table gives the price of each cereal in terms of cents per gram of protein, ordered from cheapest to most expensive. I pulled these prices off Amazon; the prices in your area might differ.&lt;/p&gt;

&lt;p&gt;For the cost of protein oatmeal, I used Quaker 1-Minute Oats plus NOW Foods Soy Protein Isolate because those are the brands I buy.&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Cereal&lt;/th&gt;
      &lt;th&gt;Price (¢/g)&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;oatmeal + protein powder&lt;/td&gt;
      &lt;td&gt;3.7¢&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Post Premier Protein&lt;/td&gt;
      &lt;td&gt;4.2¢&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Special K High Protein&lt;/td&gt;
      &lt;td&gt;4.8¢&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Wheaties Protein&lt;/td&gt;
      &lt;td&gt;4.9¢&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Special K Zero&lt;/td&gt;
      &lt;td&gt;7.9¢&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Catalina Crunch&lt;/td&gt;
      &lt;td&gt;9.1¢&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Kashi Go&lt;/td&gt;
      &lt;td&gt;11.4¢&lt;sup id=&quot;fnref:4&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:4&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Three Wishes&lt;/td&gt;
      &lt;td&gt;12.5¢&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Prices for non-vegan cereals:&lt;sup id=&quot;fnref:5&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:5&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Cereal&lt;/th&gt;
      &lt;th&gt;Price (¢/g)&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Julian Bakery ProGranola&lt;/td&gt;
      &lt;td&gt;10¢&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Wonderworks Keto Friendly&lt;/td&gt;
      &lt;td&gt;13.5¢&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Magic Spoon&lt;/td&gt;
      &lt;td&gt;13.8¢&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Snack House Keto&lt;/td&gt;
      &lt;td&gt;16.5¢&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:4&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;This is strangely expensive for a big-brand cereal (Kashi is a subsidiary of Kellogg); my guess is there’s some sort of temporary supply issue and the price will go down. &lt;a href=&quot;#fnref:4&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:5&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;How convenient for my pro-vegan agenda that the non-vegan cereals are all so expensive!&lt;/p&gt;

      &lt;p&gt;Inconveniently for my agenda, the cheapest of the non-vegan cereals uses egg protein, which &lt;a href=&quot;https://foodimpacts.org/&quot;&gt;causes more animal suffering&lt;/a&gt; than whey protein.&lt;/p&gt;

      &lt;p&gt;Honestly I’m not that concerned about whey protein; it’s one of the least harmful animal products to buy. &lt;a href=&quot;#fnref:5&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>Charity Cost-Effectiveness Really Does Follow a Power Law</title>
				<pubDate>Wed, 25 Dec 2024 00:00:00 -0800</pubDate>
				<link>http://mdickens.me/2024/12/25/charity_cost_effectiveness_power_law/</link>
				<guid isPermaLink="true">http://mdickens.me/2024/12/25/charity_cost_effectiveness_power_law/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;Conventional wisdom says charity cost-effectiveness obeys a power law. To my knowledge, this hypothesis has never been properly tested.&lt;sup id=&quot;fnref:5&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:5&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt; So I tested it and it turns out to be true.&lt;/p&gt;

&lt;p&gt;(Maybe. Cost-effectiveness might also be &lt;a href=&quot;https://en.wikipedia.org/wiki/Log-normal_distribution&quot;&gt;log-normally&lt;/a&gt; distributed.)&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Cost-effectiveness estimates for global health interventions (from &lt;a href=&quot;https://www.dcp-3.org/chapter/2561/cost-effectiveness-analysis&quot;&gt;DCP3&lt;/a&gt;) fit a power law (a.k.a. &lt;a href=&quot;https://en.wikipedia.org/wiki/Pareto_distribution&quot;&gt;Pareto distribution&lt;/a&gt;) with \(\alpha = 1.11\). &lt;a href=&quot;#fitting-dcp3-data-to-a-power-law&quot;&gt;[More]&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;Simulations indicate that the true underlying distribution has a thinner tail than the empirically observed distribution. &lt;a href=&quot;#does-estimation-error-bias-the-result&quot;&gt;[More]&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;!-- more --&gt;

&lt;h2 id=&quot;contents&quot;&gt;Contents&lt;/h2&gt;

&lt;ul id=&quot;markdown-toc&quot;&gt;
  &lt;li&gt;&lt;a href=&quot;#contents&quot; id=&quot;markdown-toc-contents&quot;&gt;Contents&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#fitting-dcp3-data-to-a-power-law&quot; id=&quot;markdown-toc-fitting-dcp3-data-to-a-power-law&quot;&gt;Fitting DCP3 data to a power law&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#does-estimation-error-bias-the-result&quot; id=&quot;markdown-toc-does-estimation-error-bias-the-result&quot;&gt;Does estimation error bias the result?&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#future-work-i-would-like-to-see&quot; id=&quot;markdown-toc-future-work-i-would-like-to-see&quot;&gt;Future work I would like to see&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#source-code-and-data&quot; id=&quot;markdown-toc-source-code-and-data&quot;&gt;Source code and data&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#notes&quot; id=&quot;markdown-toc-notes&quot;&gt;Notes&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;fitting-dcp3-data-to-a-power-law&quot;&gt;Fitting DCP3 data to a power law&lt;/h2&gt;

&lt;p&gt;The Disease Control Priorities 3 report (&lt;a href=&quot;https://www.dcp-3.org/chapter/2561/cost-effectiveness-analysis&quot;&gt;DCP3&lt;/a&gt;) provides cost-effectiveness estimates for 93 global health interventions (measured in &lt;a href=&quot;https://en.wikipedia.org/wiki/Disability-adjusted_life_year&quot;&gt;DALYs&lt;/a&gt; per US dollar). I took those 93 interventions and fitted them to a power law.&lt;/p&gt;

&lt;p&gt;You can see from this graph that the fitted power law matches the data reasonably well:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/dcp3-curve-fit.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;To be precise: the probability of a DCP3 intervention having cost-effectiveness \(x\) is well-approximated by the probability density function \(f(x) = \displaystyle\frac{1.11}{x^{2.11}}\), which is a power law (a.k.a. &lt;a href=&quot;https://en.wikipedia.org/wiki/Pareto_distribution&quot;&gt;Pareto distribution&lt;/a&gt;) with \(\alpha = 1.11\).&lt;/p&gt;
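&lt;p&gt;As a minimal sketch, one standard way to fit \(\alpha\) is maximum likelihood with \(x_{min}\) taken to be the smallest observation (this is a simplified stand-in, not necessarily my exact fitting procedure; my actual code is linked at the bottom of this post):&lt;/p&gt;

```python
import numpy as np
from scipy import stats

def fit_pareto_alpha(x):
    # Maximum-likelihood alpha for a Pareto tail, taking the smallest
    # observation as x_min: alpha = n / sum(log(x_i / x_min))
    x = np.asarray(x, dtype=float)
    return len(x) / np.log(x / x.min()).sum()

# Sanity check on synthetic Pareto data with alpha = 1.11
rng = np.random.default_rng(0)
sample = stats.pareto.rvs(1.11, size=100_000, random_state=rng)
print(round(fit_pareto_alpha(sample), 2))
```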

&lt;p&gt;It’s possible to statistically measure whether a curve fits the data using a &lt;a href=&quot;https://en.wikipedia.org/wiki/Goodness_of_fit&quot;&gt;goodness-of-fit test&lt;/a&gt;. There are a number of different goodness-of-fit tests; I used what’s known as the &lt;a href=&quot;https://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test&quot;&gt;Kolmogorov-Smirnov test&lt;/a&gt;&lt;sup id=&quot;fnref:8&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:8&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt;. This test essentially looks at how far away the data points are from where the curve predicts them to be. If many points are far to one side of the curve or the other, that means the curve is a bad fit.&lt;/p&gt;

&lt;p&gt;I ran the Kolmogorov-Smirnov test on the DCP3 data, and it determined that &lt;strong&gt;a Pareto distribution fit the data well.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The goodness-of-fit test produced a p-value of 0.79 for the null hypothesis that the data follows a Pareto distribution. p = 0.79 means that, if you generated random data from a Pareto distribution, there’s a 79% chance that the random data would look &lt;em&gt;less&lt;/em&gt; like a Pareto distribution than the DCP3 data does. That’s good evidence that the DCP3 data is indeed Pareto-distributed or close to it.&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;
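&lt;p&gt;The test itself is one line in scipy. Here’s a hedged sketch, with synthetic data standing in for the 93 DCP3 estimates (and keeping in mind the caveat from the footnote: estimating \(\alpha\) from the same sample makes the p-value somewhat optimistic):&lt;/p&gt;

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic stand-in for the 93 DCP3 cost-effectiveness estimates
data = stats.pareto.rvs(1.11, size=93, random_state=rng)

alpha_hat = len(data) / np.log(data).sum()  # MLE with x_min = 1
result = stats.kstest(data, "pareto", args=(alpha_hat,))
print(round(alpha_hat, 2), round(result.pvalue, 2))
```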

&lt;p&gt;However, the data also fits well to a &lt;a href=&quot;https://en.wikipedia.org/wiki/Log-normal_distribution&quot;&gt;log-normal distribution&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Pareto and log-normal distributions look similar most of the time. They only noticeably differ in the far right tail—a Pareto distribution has a fatter tail than a log-normal distribution, and this becomes more pronounced the further out you look. But in real-world samples, we usually don’t see enough tail outcomes to distinguish between the two distributions.&lt;/p&gt;

&lt;p&gt;DCP3 only includes global health interventions. If we expanded the data to include other types of interventions, we might find a fatter tail, but I’m not aware of any databases that cover a more comprehensive set of cause areas.&lt;/p&gt;

&lt;p&gt;(The World Bank has data on &lt;a href=&quot;https://openknowledge.worldbank.org/handle/10986/34658&quot;&gt;education interventions&lt;/a&gt;, but adding one cause area at a time feels ad-hoc and it would create gaps in the distribution.)&lt;/p&gt;

&lt;h2 id=&quot;does-estimation-error-bias-the-result&quot;&gt;Does estimation error bias the result?&lt;/h2&gt;

&lt;p&gt;Yes—it causes you to underestimate the true value of \(\alpha\).&lt;/p&gt;

&lt;p&gt;(Recall that the alpha (\(\alpha\)) parameter determines the fatness of the tail—lower alpha means a fatter tail. So estimation error makes the tail look fatter than it really is.)&lt;/p&gt;

&lt;p&gt;There’s a difference between cost-effectiveness and &lt;em&gt;estimated&lt;/em&gt; cost-effectiveness. Perhaps the estimates follow a power law, but the true underlying cost-effectiveness numbers &lt;em&gt;don’t&lt;/em&gt;. And even if the true numbers do follow a power law, estimation error might bias the shape of the fitted distribution.&lt;/p&gt;

&lt;p&gt;I tested this by generating random Pareto-distributed&lt;sup id=&quot;fnref:6&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:6&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt; data to represent true cost-effectiveness, and then multiplying by a random noise variable to represent estimation error. I generated the noise as a log-normally-distributed random variable centered at 1 with \(\sigma = 0.5\)&lt;sup id=&quot;fnref:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;5&lt;/a&gt;&lt;/sup&gt; (colloquially, that means you can expect the estimate to be off by 50%).&lt;/p&gt;

&lt;p&gt;I generated 10,000 random samples&lt;sup id=&quot;fnref:4&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:4&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;6&lt;/a&gt;&lt;/sup&gt; at various values of alpha, applied some estimation error, and then fit the resulting estimates to a Pareto distribution. The results showed strong goodness of fit, but the estimated alphas did not match the true alphas:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;pre&gt;&lt;code&gt;true alpha 0.8  --&amp;gt;  0.73 estimated alpha (goodness-of-fit: p = 0.3)
true alpha 1.0  --&amp;gt;  0.89 estimated alpha (goodness-of-fit: p = 0.08)
true alpha 1.2  --&amp;gt;  1.07 estimated alpha (goodness-of-fit: p = 0.4)
true alpha 1.4  --&amp;gt;  1.22 estimated alpha (goodness-of-fit: p = 0.5)
true alpha 1.8  --&amp;gt;  1.54 estimated alpha (goodness-of-fit: p = 0.1)
&lt;/code&gt;&lt;/pre&gt;
&lt;/blockquote&gt;
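&lt;p&gt;The table above comes from a procedure like this (a minimal re-implementation, not my exact code, which is linked below; the fitting step here uses a simple MLE with the smallest observation as \(x_{min}\)):&lt;/p&gt;

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def fit_pareto_alpha(x):
    # MLE with the smallest observation as x_min
    x = np.asarray(x, dtype=float)
    return len(x) / np.log(x / x.min()).sum()

def estimated_alpha(true_alpha, n=10_000, sigma=0.5):
    # True cost-effectiveness is Pareto; the estimate multiplies it by
    # log-normal noise centered at 1 (sigma = 0.5 is roughly a 50% error)
    true_ce = stats.pareto.rvs(true_alpha, size=n, random_state=rng)
    noise = rng.lognormal(mean=0.0, sigma=sigma, size=n)
    return fit_pareto_alpha(true_ce * noise)

for a in (0.8, 1.0, 1.2, 1.4, 1.8):
    print(a, round(estimated_alpha(a), 2))
```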

&lt;p&gt;To determine the variance of the bias, I generated 93 random samples at a true alpha of 1.1 (to match the DCP3 data) and fitted a Pareto curve to the samples. I repeated this process 10,000 times.&lt;/p&gt;

&lt;p&gt;Across all generations, the average estimated alpha was 1.06 with a standard deviation of 0.27. That’s a small bias—only 0.04—but it’s highly statistically significant (t-stat = –15, p = 0&lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;7&lt;/a&gt;&lt;/sup&gt;).&lt;/p&gt;
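&lt;p&gt;That experiment looks something like this (a sketch using fewer repetitions than the 10,000 I actually ran, purely for speed):&lt;/p&gt;

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def fit_pareto_alpha(x):
    # MLE with the smallest observation as x_min
    x = np.asarray(x, dtype=float)
    return len(x) / np.log(x / x.min()).sum()

# Repeatedly: draw 93 true values at alpha = 1.1, apply 50% multiplicative
# log-normal error, and refit. The mean and spread of the fitted alphas
# show the bias and its variance. (2,000 repetitions here, not 10,000.)
fits = []
for _ in range(2_000):
    true_ce = stats.pareto.rvs(1.1, size=93, random_state=rng)
    noisy = true_ce * rng.lognormal(0.0, 0.5, size=93)
    fits.append(fit_pareto_alpha(noisy))

print(round(np.mean(fits), 2), round(np.std(fits), 2))
```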

&lt;p&gt;A true alpha of 1.15 produces a mean estimate of 1.11, which equals the alpha of the DCP3 cost-effectiveness data. So if the DCP3 estimates have a 50% error (\(\sigma = 0.5\)), then the true alpha parameter is more like 1.15.&lt;/p&gt;

&lt;p&gt;Increasing the estimate error greatly increases the bias. When I changed the error (the \(\sigma\) parameter) from 50% to 100%, the bias became concerningly large, and it gets larger for higher values of alpha:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;pre&gt;&lt;code&gt;true alpha 0.8  --&amp;gt;  0.65 mean estimated alpha
true alpha 1.0  --&amp;gt;  0.78 mean estimated alpha
true alpha 1.2  --&amp;gt;  0.87 mean estimated alpha
true alpha 1.4  --&amp;gt;  0.96 mean estimated alpha
true alpha 1.6  --&amp;gt;  1.03 mean estimated alpha
true alpha 1.8  --&amp;gt;  1.11 mean estimated alpha
&lt;/code&gt;&lt;/pre&gt;
&lt;/blockquote&gt;

&lt;p&gt;If the DCP3 samples have a 100% error (\(\sigma = 1\)), then the true alpha is more like 1.8—much higher than the estimated value of 1.11.&lt;/p&gt;

&lt;p&gt;In addition, at 100% error with 10,000 samples, the estimates no longer fit a Pareto distribution well—the p-value of the goodness-of-fit test ranged from 0.005 to &amp;lt;0.00001 depending on the true alpha value.&lt;sup id=&quot;fnref:7&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:7&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;8&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;Curiously, &lt;em&gt;decreasing&lt;/em&gt; the estimate error flipped the bias from negative to positive. When I reduced the simulation’s estimate error to 20%, a true alpha of 1.1 produced a mean estimated alpha of 1.14 (standard deviation 0.31, t-stat = 13, p = 0&lt;sup id=&quot;fnref:2:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;7&lt;/a&gt;&lt;/sup&gt;). A 20% error produced a positive bias across a range of alpha values—the estimated alpha was always a bit higher than the true alpha.&lt;/p&gt;

&lt;h2 id=&quot;future-work-i-would-like-to-see&quot;&gt;Future work I would like to see&lt;/h2&gt;

&lt;ol&gt;
  &lt;li&gt;A comprehensive DCP3-esque list of cost-effectiveness estimates for every conceivable intervention, not just global health. (That’s probably never going to happen but it would be nice.)&lt;/li&gt;
  &lt;li&gt;More data on the outer tail of cost-effectiveness estimates, to better identify whether the distribution looks more Pareto or more log-normal.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2 id=&quot;source-code-and-data&quot;&gt;Source code and data&lt;/h2&gt;

&lt;p&gt;Source code is available &lt;a href=&quot;https://github.com/michaeldickens/public-scripts/blob/master/intervention_power_laws.py&quot;&gt;on GitHub&lt;/a&gt;. Cost-effectiveness estimates are extracted from DCP3’s &lt;a href=&quot;https://www.dcp-3.org/sites/default/files/chapters/Annex%207A.%20Details%20of%20Interventions%20in%20Figs.pdf&quot;&gt;Annex 7A&lt;/a&gt;; I’ve reproduced the numbers &lt;a href=&quot;https://github.com/michaeldickens/public-scripts/blob/master/data/DCP3%20cost%20per%20DALY.txt&quot;&gt;here&lt;/a&gt; in a more convenient format.&lt;/p&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:5&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;The closest I could find was Stijn on the EA Forum, who &lt;a href=&quot;https://forum.effectivealtruism.org/posts/FXaCnPMiw3jWrnkho/cost-effectiveness-distributions-power-laws-and-scale&quot;&gt;plotted&lt;/a&gt; a subset of the Disease Control Priorities data on a log-log plot and fit the points to a power law distribution, but did not statistically test whether a power law represented the data well. &lt;a href=&quot;#fnref:5&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:8&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Some details on goodness-of-fit tests:&lt;/p&gt;

      &lt;p&gt;Kolmogorov-Smirnov is the standard test, but it depends on the assumption that you know the true parameter values. If you estimate the parameters from the sample (as I did), then it can overestimate fit quality.&lt;/p&gt;

      &lt;p&gt;A recent paper by &lt;a href=&quot;http://soche.cl/chjs/volumes/09/01/Suarez-Espinosa_etal(2018).pdf&quot;&gt;Suárez-Espinosa et al. (2018)&lt;/a&gt;&lt;sup id=&quot;fnref:9&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:9&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;9&lt;/a&gt;&lt;/sup&gt; devises a goodness-of-fit test for the Pareto distribution that does not depend on knowing parameter values. I implemented the test but did not find it to be more reliable than Kolmogorov-Smirnov—for example, it reported a very strong fit when I generated random data from a log-normal distribution. &lt;a href=&quot;#fnref:8&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;A high p-value is not always evidence in favor of the null hypothesis. It’s only evidence if you expect that, &lt;em&gt;if&lt;/em&gt; the null hypothesis is false, &lt;em&gt;then&lt;/em&gt; you will get a low p-value. But that’s true in this case.&lt;/p&gt;

      &lt;p&gt;(I’ve &lt;a href=&quot;https://mdickens.me/2024/09/26/outlive_a_critical_review/#people-with-metabolically-healthy-obesity-do-not-have-elevated-mortality-risk&quot;&gt;previously&lt;/a&gt; complained about how scientific papers often treat p &amp;gt; 0.05 as evidence in favor of the null hypothesis, even when you’d expect to see p &amp;gt; 0.05 &lt;em&gt;regardless&lt;/em&gt; of whether the null hypothesis was true or false—for example, if their study was &lt;a href=&quot;https://en.wikipedia.org/wiki/Power_(statistics)&quot;&gt;underpowered&lt;/a&gt;.)&lt;/p&gt;

      &lt;p&gt;If the data did not fit a Pareto distribution then we’d expect to see a much smaller p-value. For example, a goodness-of-fit test for a normal distribution gives p &amp;lt; 0.000001, and a gamma distribution gives p = 0.08. A log-normal distribution gives p = 0.96, so we can’t tell whether the data is Pareto or log-normal, but it’s unlikely to be normal or gamma. &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
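      &lt;p&gt;A minimal version of that comparison (illustrative only, using synthetic heavy-tailed data rather than the DCP3 numbers):&lt;/p&gt;

```python
# Fit several candidate distributions to synthetic heavy-tailed data and
# compare KS p-values (illustrative; not the DCP3 data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = stats.lognorm.rvs(s=2.0, size=300, random_state=rng)

pvalues = {}
for name in ['norm', 'gamma', 'lognorm', 'pareto']:
    dist = getattr(stats, name)
    params = dist.fit(data)
    pvalues[name] = stats.kstest(data, name, args=params).pvalue

# Expect the normal fit to be decisively rejected and the log-normal not.
```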
    &lt;/li&gt;
    &lt;li id=&quot;fn:6&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Actually I used a &lt;a href=&quot;https://en.wikipedia.org/wiki/Lomax_distribution&quot;&gt;Lomax distribution&lt;/a&gt;, which is the same as a Pareto distribution except that the lowest possible value is 0 instead of 1. &lt;a href=&quot;#fnref:6&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
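      &lt;p&gt;In scipy’s parameterization (an aside, assuming scipy’s conventions rather than anything in my script), the relationship is just a unit shift:&lt;/p&gt;

```python
# scipy's Lomax is a Pareto whose support starts at 0 instead of 1:
# lomax.pdf(x, c) = c / (1 + x)**(c + 1), pareto.pdf(y, c) = c / y**(c + 1).
import numpy as np
from scipy import stats

x = np.linspace(0.0, 10.0, 50)
c = 1.5  # shape parameter
same = np.allclose(stats.lomax.pdf(x, c), stats.pareto.pdf(x + 1.0, c))
```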
    &lt;/li&gt;
    &lt;li id=&quot;fn:3&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;The \(\sigma\) parameter is the standard deviation of the logarithm of the random variable. &lt;a href=&quot;#fnref:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
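      &lt;p&gt;A quick numerical check of this (illustrative, not code from the post):&lt;/p&gt;

```python
# For a log-normal X with shape parameter sigma, std(log(X)) equals sigma.
import numpy as np
from scipy import stats

sigma = 0.7
rng = np.random.default_rng(2)
samples = stats.lognorm.rvs(s=sigma, scale=1.0, size=200_000, random_state=rng)
log_sd = np.log(samples).std()
# log_sd should be close to sigma = 0.7
```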
    &lt;/li&gt;
    &lt;li id=&quot;fn:4&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;In practice we will never have 10,000 distinct cost-effectiveness estimates. But when testing goodness-of-fit, it’s useful to generate many samples because a large data set is hard to overfit. &lt;a href=&quot;#fnref:4&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:2&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;As in, the p-value is so small that my computer rounds it off to zero. &lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:2:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:7&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Perhaps that’s evidence that the DCP3 estimates have less than a 100% error, since they do fit a Pareto distribution well? That would be convenient if true.&lt;/p&gt;

      &lt;p&gt;But it’s easy to get a good fit if we reduce the sample size to 93. When I generated 93 samples with 100% error, I got a p-value greater than 0.5 most of the time. &lt;a href=&quot;#fnref:7&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:9&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Suárez-Espinosa, J., Villaseñor-Alva, J. A., Hurtado-Jaramillo, A., &amp;amp; Pérez-Rodríguez, P. (2018). &lt;a href=&quot;http://soche.cl/chjs/volumes/09/01/Suarez-Espinosa_etal(2018).pdf&quot;&gt;A goodness of fit test for the Pareto distribution.&lt;/a&gt; &lt;a href=&quot;#fnref:9&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>"You Can't Calculate the Expected Utility of a Communist Revolution"</title>
				<pubDate>Fri, 06 Dec 2024 00:00:00 -0800</pubDate>
				<link>http://mdickens.me/2024/12/06/expected_utility_of_communist_revolution/</link>
				<guid isPermaLink="true">http://mdickens.me/2024/12/06/expected_utility_of_communist_revolution/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;Leftist critics of effective altruism like to say this. Well, it’s not true, and I proved it by calculating (an estimate of) the expected utility of a communist revolution. It wasn’t even hard—it took me less than an hour.&lt;/p&gt;

&lt;!-- more --&gt;

&lt;p&gt;I put my cost-effectiveness analysis on SquiggleHub: &lt;a href=&quot;https://squigglehub.org/models/mdickens/communist-revolution-ev&quot;&gt;https://squigglehub.org/models/mdickens/communist-revolution-ev&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;According to my analysis, working to start a communist revolution is between 11x and 400x as good as donating to &lt;a href=&quot;https://www.givewell.org/international/technical/programs/givedirectly-cash-for-poverty-relief-program&quot;&gt;GiveDirectly&lt;/a&gt; (that’s the 95% credence interval).&lt;sup id=&quot;fnref:5&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:5&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt; That makes it probably better than any &lt;a href=&quot;https://www.givewell.org/charities/top-charities&quot;&gt;GiveWell top charity&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;That’s assuming communism is good, which it isn’t, but let’s say it is for the sake of argument.&lt;/p&gt;

&lt;p&gt;My model includes calculations with extensive commentary. The model is relatively simple—I recommend &lt;a href=&quot;https://squigglehub.org/models/mdickens/communist-revolution-ev&quot;&gt;reading it&lt;/a&gt; if you want to understand how it works.&lt;/p&gt;

&lt;p&gt;In short, the cost-effectiveness analysis goes like this:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Assume life under communism will be 20% better than the status quo. This is probably an underestimate because it’s hard to capture the soul-crushing indignities of capitalism&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt; but let’s say 20% to be conservative.&lt;/li&gt;
  &lt;li&gt;Look at the number of revolutionaries in historical communist revolutions to estimate the probability of success per revolutionary.&lt;/li&gt;
  &lt;li&gt;Downweight the probability of success based on the fact that the revolution might result in an inferior version of communism (like what happened with the Soviet Union).&lt;/li&gt;
  &lt;li&gt;Estimate how long it would take until a communist revolution happened anyway. The value of doing a revolution &lt;em&gt;now&lt;/em&gt; equals &lt;code&gt;[time between now and when it would have happened anyway] x [number of people who get to live under the new regime] x [how good communism is compared to the status quo]&lt;/code&gt;.&lt;/li&gt;
  &lt;li&gt;The previous step gives us the total value of a revolution. Multiply that by the probability of success per revolutionary to get the expected value of a single individual’s efforts.&lt;/li&gt;
  &lt;li&gt;Compare to how much an individual could donate to GiveDirectly if they worked a normal job instead of doing activism. That gives us the expected value of a communist revolutionary relative to a GiveDirectly donor.&lt;/li&gt;
&lt;/ol&gt;
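&lt;p&gt;The six steps above can be sketched as a Monte Carlo estimate. To be clear: every number below is a hypothetical placeholder chosen for illustration, not a figure from the SquiggleHub model.&lt;/p&gt;

```python
# Hypothetical Monte Carlo sketch of the six steps above. Every number is
# a made-up placeholder, not a value from the actual SquiggleHub model.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# 1. Assume life under communism is 20 percent better (placeholder).
improvement = 0.2
# 2. Probability of success per revolutionary (hypothetical prior).
p_success_per_rev = rng.lognormal(mean=np.log(1e-9), sigma=1.0, size=n)
# 3. Chance the revolution yields a non-inferior version of communism.
p_good_version = 0.5
# 4. Years gained by a revolution now, times people affected, times benefit.
years_early = rng.lognormal(mean=np.log(50), sigma=0.5, size=n)
population = 1e9
total_value = years_early * population * improvement * p_good_version
# 5. Expected value of one marginal revolutionary's efforts.
ev_per_revolutionary = total_value * p_success_per_rev
# 6. Benchmark: lifetime GiveDirectly donations times value per dollar
#    (both placeholders).
givedirectly_value = 500_000 * 0.0025
multiplier = ev_per_revolutionary / givedirectly_value
lo, hi = np.quantile(multiplier, [0.025, 0.975])
```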

&lt;p&gt;Is this model perfectly accurate? No. I spent half an hour on it. I can immediately think of several ways it could be better.&lt;sup id=&quot;fnref:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt; But the model is still &lt;em&gt;informative&lt;/em&gt;—it provides a basic case that working for a communist revolution could be more cost-effective than GiveWell top charities. And it shows that, contrary to what many people say, it’s possible in principle to calculate the expected utility of a communist revolution.&lt;/p&gt;

&lt;p&gt;The numbers in my model are mostly made up. But &lt;a href=&quot;https://slatestarcodex.com/2013/05/02/if-its-worth-doing-its-worth-doing-with-made-up-statistics/&quot;&gt;if it’s worth doing, it’s worth doing with made-up statistics.&lt;/a&gt;&lt;/p&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:5&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Using GiveWell’s 2023 numbers. The 2024 numbers are considerably more optimistic about GiveDirectly, which makes the multiplier smaller, but a communist revolution still compares favorably to GiveWell top charities. &lt;a href=&quot;#fnref:5&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I don’t actually believe this; this is me wearing my leftism hat. &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:3&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Some obvious improvements:&lt;/p&gt;

      &lt;ol&gt;
        &lt;li&gt;Look at a variety of historical revolutions instead of just one.&lt;/li&gt;
        &lt;li&gt;Account for the skill of the activist in question. Presumably, some people are more skilled than others at garnering support.&lt;/li&gt;
        &lt;li&gt;Instead of treating all activists as equally responsible, estimate the probability that a marginal activist causes a revolution to occur. Or estimate how much sooner a revolution occurs thanks to a marginal activist.&lt;/li&gt;
      &lt;/ol&gt;
      &lt;p&gt;&lt;a href=&quot;#fnref:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>Thoughts on My Donation Process</title>
				<pubDate>Wed, 04 Dec 2024 00:00:00 -0800</pubDate>
				<link>http://mdickens.me/2024/12/04/thoughts_on_my_donation_process/</link>
				<guid isPermaLink="true">http://mdickens.me/2024/12/04/thoughts_on_my_donation_process/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;I have some observations and half-baked ideas about my recent &lt;a href=&quot;https://mdickens.me/2024/11/18/where_i_am_donating_in_2024/&quot;&gt;donation process&lt;/a&gt;. They weren’t important enough to include in the main post, but I want to talk about them anyway.&lt;/p&gt;

&lt;!-- more --&gt;

&lt;h2 id=&quot;on-deference&quot;&gt;On deference&lt;/h2&gt;

&lt;p&gt;Usually, I defer to the beliefs of other people who have spent more time on an issue than me, or who plausibly have more expertise, or who I just expect to have reasonable beliefs.&lt;/p&gt;

&lt;p&gt;While writing my donations post, I made a conscious effort to defer less than usual. Deference might maximize the probability that I make the correct decision, but deference reduces the total amount of reasoning that’s happening, which is bad for the group as a whole. I want there to be more reasoning happening.&lt;/p&gt;

&lt;p&gt;This is most relevant in my discussion of a few orgs that I disliked, which are also very popular among big EA funders. I’m 99% confident that the big funders have private information about those orgs, so maybe I should defer to them. But I’m also maybe 75% confident that if I had access to that information, it wouldn’t materially change my mind. I did anticipate that the private evidence would make me like the orgs a little better, so I updated based on this anticipation and evaluated the orgs a little more favorably than I would have otherwise.&lt;/p&gt;

&lt;h2 id=&quot;on-criticizing-orgs&quot;&gt;On criticizing orgs&lt;/h2&gt;

&lt;p&gt;I am not as nice as I’d like to be. I have a habit of accidentally saying mean things that hurt people’s feelings.&lt;/p&gt;

&lt;p&gt;On the other hand, I think most people are &lt;em&gt;too&lt;/em&gt; nice: they hurt others long-term by refusing to give them useful information that’s difficult to hear.&lt;/p&gt;

&lt;p&gt;(In theory, it’s possible to never say unnecessarily mean things, and always say necessary things, but only if you have perfect communication skills. In practice, there’s a tradeoff.)&lt;/p&gt;

&lt;p&gt;I think it’s a good norm that, if you’re investigating an org and it opens up to you, you shouldn’t take what you learn and use it against the org. I probably wouldn’t criticize an org based on private information that it gave me. I did criticize some orgs, but all my criticisms were based on public information.&lt;/p&gt;

&lt;p&gt;I think if most people wrote a donation post like mine, they’d self-censor in the interest of niceness and end up leaving out important information. I tried to avoid that, and erred more on the side of being mean (not pointlessly mean, but mean-and-truthful; or maybe I should say mean-and-accurately-conveying-my-beliefs, since I can’t promise that the things I said were true).&lt;/p&gt;

&lt;p&gt;As with my choice on deference, this was perhaps the wrong choice at an individual level but the right choice at the group level.&lt;/p&gt;

&lt;p&gt;I did focus on criticizing organizations and avoided saying negative things about specific people whenever possible.&lt;/p&gt;

&lt;h2 id=&quot;donation-sizing&quot;&gt;Donation sizing&lt;/h2&gt;

&lt;p&gt;I have a donor-advised fund (DAF) that I contributed to when I was earning to give. How much of my DAF money should I donate this year? What’s a reasonable spend-down rate?&lt;/p&gt;

&lt;p&gt;I’ve put a lot of thought into &lt;a href=&quot;https://mdickens.me/2020/07/03/estimating_discount_rate/&quot;&gt;how quickly to spend philanthropic resources&lt;/a&gt;, including &lt;a href=&quot;https://mdickens.me/2021/08/02/ai_timelines_now_vs_later/&quot;&gt;how AI timelines affect the answer&lt;/a&gt;. Unfortunately, all that thinking didn’t much help me answer the question.&lt;/p&gt;

&lt;p&gt;Plus, there are some complications:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;I have some personal savings, which I could choose to donate. Should I count them as part of my donation money?&lt;/li&gt;
  &lt;li&gt;I might earn significant income in the future. Right now it looks like I won’t, but I might do more earning to give at some point, or I might take a direct-work job that happens to pay well. If I expect to earn more in the future, then I should spend more of my DAF now.&lt;/li&gt;
  &lt;li&gt;I didn’t donate much money for the last few years. Should I do catch-up donations this year? Or maybe spread out my catch-up donations over the next few years?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I didn’t come up with good answers to any of these questions. Ultimately I chose how much to donate based on what felt reasonable.&lt;/p&gt;

&lt;h2 id=&quot;diversifying-donations-as-a-trade&quot;&gt;Diversifying donations as a trade&lt;/h2&gt;

&lt;p&gt;I had an idea based on &lt;a href=&quot;https://forum.effectivealtruism.org/posts/zuqpqqFoue5LyutTv/the-ea-community-does-not-own-its-donors-money?commentId=HquctY3LJBt42fssh&quot;&gt;this comment&lt;/a&gt; by Oliver Habryka. He describes a trade between members of the EA community where some people do object-level work (relinquishing a high-paying job) and others earn money. He argues that when this trade occurs, the people doing object-level work should have some ownership over the funds that earners-to-give have earned.&lt;/p&gt;

&lt;p&gt;I spent a while earning to give. So arguably I should donate money to people who started out in a similar position as me but went into direct work instead. Essentially, I should (&lt;a href=&quot;https://www.lesswrong.com/tag/acausal-trade&quot;&gt;acausally&lt;/a&gt;) trade with altruists who could’ve earned a lot of money but didn’t. And because there are many such people, arguably I should split my donations across many of them instead of only donating to my #1 favorite thing.&lt;/p&gt;

&lt;p&gt;But this argument raises some questions. Who exactly was in a “similar position as me”? What about people who aren’t members of the EA community, but who are nonetheless doing similarly valuable work? What about people who didn’t have the necessary skill set to earn a lot of money, so they never made a choice not to?&lt;/p&gt;

&lt;p&gt;I decided not to further pursue this line of reasoning because I couldn’t figure out how to make sense of it. I just did the obvious thing of donating to the org(s) that looked most cost-effective on the margin.&lt;/p&gt;

&lt;h2 id=&quot;cooperating-with-the-survival-and-flourishing-fund&quot;&gt;Cooperating with the Survival and Flourishing Fund&lt;/h2&gt;

&lt;p&gt;Should I donate less money to orgs that have received grants from the &lt;a href=&quot;https://survivalandflourishing.fund/&quot;&gt;Survival and Flourishing Fund&lt;/a&gt; (SFF)?&lt;/p&gt;

&lt;p&gt;I want to be cooperative with SFF. If I donate less to an org that’s received SFF funding, that seems uncooperative.&lt;/p&gt;

&lt;p&gt;SFF has the &lt;a href=&quot;https://survivalandflourishing.fund/s-process&quot;&gt;S-process&lt;/a&gt;, which is a fancy method for allocating donations from a group of value-aligned donors who each want to be the donor of last resort, but who also want to make sure their favored orgs get funded. I could cooperate with SFF by participating in this process.&lt;/p&gt;

&lt;p&gt;I asked them if they wanted to add my money to the S-process and they declined, so I consider myself to have officially Cooperated and now I’m allowed to donate less to orgs that received SFF funding. I don’t think SFF really cares if its donations trade off against mine because I have much less money than it does.&lt;/p&gt;



                </description>
			</item>
		
			<item>
				<title>Where I Am Donating in 2024</title>
				<pubDate>Mon, 18 Nov 2024 00:00:00 -0800</pubDate>
				<link>http://mdickens.me/2024/11/18/where_i_am_donating_in_2024/</link>
				<guid isPermaLink="true">http://mdickens.me/2024/11/18/where_i_am_donating_in_2024/</guid>
                <description>
                  
                  
                  
                  &lt;h2 id=&quot;summary&quot;&gt;Summary&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;Last updated 2025-04-25.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;It’s &lt;a href=&quot;https://mdickens.me/2016/10/31/where_i_am_donating_in_2016/&quot;&gt;been a while&lt;/a&gt; since I last put serious thought into where to donate. Well, I’m putting thought into it this year, and I’m changing my mind on some things.&lt;/p&gt;

&lt;p&gt;I now put more priority on existential risk (especially AI risk), and less on animal welfare and global priorities research. I believe I previously gave too little consideration to x-risk for emotional reasons, and I’ve managed to reason myself out of those emotions.&lt;/p&gt;

&lt;p&gt;Within x-risk:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;AI is the most important source of risk.&lt;/li&gt;
  &lt;li&gt;There is a disturbingly high probability that alignment research won’t solve alignment by the time superintelligent AI arrives. Policy work seems more promising.&lt;/li&gt;
  &lt;li&gt;Specifically, I am most optimistic about policy advocacy for government regulation to pause/slow down AI development.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the rest of this post, I will explain:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Why I prioritize x-risk over &lt;a href=&quot;#s-risk-research-and-animal-focused-longtermism&quot;&gt;animal-focused longtermist work&lt;/a&gt; and &lt;a href=&quot;#x-risk-vs-global-priorities-research&quot;&gt;global priorities research&lt;/a&gt;.&lt;/li&gt;
  &lt;li&gt;Why I prioritize AI policy over &lt;a href=&quot;#ai-safety-technical-research-vs-policy&quot;&gt;AI alignment research&lt;/a&gt;.&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#man-versus-man-conflicts-within-ai-policy&quot;&gt;My beliefs&lt;/a&gt; about what kinds of policy work are best.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Then I provide a &lt;a href=&quot;#organizations&quot;&gt;list of organizations&lt;/a&gt; working on AI policy and my evaluation of each of them, and &lt;a href=&quot;#where-im-donating&quot;&gt;where&lt;/a&gt; I plan to donate.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Cross-posted to the &lt;a href=&quot;https://forum.effectivealtruism.org/posts/jAfhxWSzsw4pLypRt/where-i-am-donating-in-2024&quot;&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;!-- more --&gt;

&lt;h2 id=&quot;contents&quot;&gt;Contents&lt;/h2&gt;

&lt;ul id=&quot;markdown-toc&quot;&gt;
  &lt;li&gt;&lt;a href=&quot;#summary&quot; id=&quot;markdown-toc-summary&quot;&gt;Summary&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#contents&quot; id=&quot;markdown-toc-contents&quot;&gt;Contents&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#i-dont-like-donating-to-x-risk&quot; id=&quot;markdown-toc-i-dont-like-donating-to-x-risk&quot;&gt;I don’t like donating to x-risk&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#cause-prioritization&quot; id=&quot;markdown-toc-cause-prioritization&quot;&gt;Cause prioritization&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#s-risk-research-and-animal-focused-longtermism&quot; id=&quot;markdown-toc-s-risk-research-and-animal-focused-longtermism&quot;&gt;S-risk research and animal-focused longtermism&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#x-risk-vs-global-priorities-research&quot; id=&quot;markdown-toc-x-risk-vs-global-priorities-research&quot;&gt;X-risk vs. global priorities research&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#prioritization-within-x-risk&quot; id=&quot;markdown-toc-prioritization-within-x-risk&quot;&gt;Prioritization within x-risk&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#ai-safety-technical-research-vs-policy&quot; id=&quot;markdown-toc-ai-safety-technical-research-vs-policy&quot;&gt;AI safety technical research vs. policy&lt;/a&gt;        &lt;ul&gt;
          &lt;li&gt;&lt;a href=&quot;#quantitative-model-on-research-vs-policy&quot; id=&quot;markdown-toc-quantitative-model-on-research-vs-policy&quot;&gt;Quantitative model on research vs. policy&lt;/a&gt;&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#man-versus-man-conflicts-within-ai-policy&quot; id=&quot;markdown-toc-man-versus-man-conflicts-within-ai-policy&quot;&gt;“Man versus man” conflicts within AI policy&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#parallel-safetycapabilities-vs-slowing-ai&quot; id=&quot;markdown-toc-parallel-safetycapabilities-vs-slowing-ai&quot;&gt;Parallel safety/capabilities vs. slowing AI&lt;/a&gt;        &lt;ul&gt;
          &lt;li&gt;&lt;a href=&quot;#if-we-dont-advance-capabilities-china-will-or-some-other-company-that-doesnt-care-about-safety-will&quot; id=&quot;markdown-toc-if-we-dont-advance-capabilities-china-will-or-some-other-company-that-doesnt-care-about-safety-will&quot;&gt;&lt;strong&gt;If we don’t advance capabilities, China will. Or some other company that doesn’t care about safety will.&lt;/strong&gt;&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#ai-companies-need-to-build-state-of-the-art-sota-models-so-they-can-learn-how-to-align-those-models&quot; id=&quot;markdown-toc-ai-companies-need-to-build-state-of-the-art-sota-models-so-they-can-learn-how-to-align-those-models&quot;&gt;&lt;strong&gt;AI companies need to build state-of-the-art (SOTA) models so they can learn how to align those models.&lt;/strong&gt;&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#we-need-to-develop-ai-as-soon-as-possible-because-it-will-greatly-improve-peoples-lives-and-were-losing-out-on-a-huge-opportunity-cost&quot; id=&quot;markdown-toc-we-need-to-develop-ai-as-soon-as-possible-because-it-will-greatly-improve-peoples-lives-and-were-losing-out-on-a-huge-opportunity-cost&quot;&gt;&lt;strong&gt;We need to develop AI as soon as possible because it will greatly improve people’s lives and we’re losing out on a huge opportunity cost.&lt;/strong&gt;&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#we-should-advance-capabilities-to-avoid-a-hardware-overhang-a-situation-where-ai-can-be-improved-purely-by-throwing-more-hardware-at-it-which-is-potentially-dangerous-because-it-could-cause-ai-to-leap-forward-without-giving-people-time-to-prepare&quot; id=&quot;markdown-toc-we-should-advance-capabilities-to-avoid-a-hardware-overhang-a-situation-where-ai-can-be-improved-purely-by-throwing-more-hardware-at-it-which-is-potentially-dangerous-because-it-could-cause-ai-to-leap-forward-without-giving-people-time-to-prepare&quot;&gt;&lt;strong&gt;We should advance capabilities to avoid a “hardware overhang”&lt;/strong&gt;: a situation where AI can be improved purely by throwing more hardware at it, which is potentially dangerous because it could cause AI to leap forward without giving people time to prepare.&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#we-need-agi-to-prevent-some-other-existential-risk-from-killing-everyone&quot; id=&quot;markdown-toc-we-need-agi-to-prevent-some-other-existential-risk-from-killing-everyone&quot;&gt;&lt;strong&gt;We need AGI to prevent some other existential risk from killing everyone.&lt;/strong&gt;&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#its-okay-to-advance-capabilities-because-ai-does-not-pose-an-existential-risk&quot; id=&quot;markdown-toc-its-okay-to-advance-capabilities-because-ai-does-not-pose-an-existential-risk&quot;&gt;&lt;strong&gt;It’s okay to advance capabilities because AI does not pose an existential risk.&lt;/strong&gt;&lt;/a&gt;&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#freedom-vs-regulation&quot; id=&quot;markdown-toc-freedom-vs-regulation&quot;&gt;Freedom vs. regulation&lt;/a&gt;        &lt;ul&gt;
          &lt;li&gt;&lt;a href=&quot;#regulations-to-slow-ai-would-require-the-government-to-take-authoritarian-measures&quot; id=&quot;markdown-toc-regulations-to-slow-ai-would-require-the-government-to-take-authoritarian-measures&quot;&gt;&lt;strong&gt;Regulations to slow AI would require the government to take authoritarian measures.&lt;/strong&gt;&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#regulations-to-slow-ai-might-be-nearly-impossible-to-lift-even-if-ai-alignment-gets-solved-and-then-we-wont-get-the-glorious-transhumanist-future&quot; id=&quot;markdown-toc-regulations-to-slow-ai-might-be-nearly-impossible-to-lift-even-if-ai-alignment-gets-solved-and-then-we-wont-get-the-glorious-transhumanist-future&quot;&gt;&lt;strong&gt;Regulations to slow AI might be nearly impossible to lift even if AI alignment gets solved, and then we won’t get the glorious transhumanist future.&lt;/strong&gt;&lt;/a&gt;&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#slow-nuanced-regulation-vs-fast-coarse-regulation&quot; id=&quot;markdown-toc-slow-nuanced-regulation-vs-fast-coarse-regulation&quot;&gt;Slow nuanced regulation vs. fast coarse regulation&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#working-with-vs-against-ai-companies&quot; id=&quot;markdown-toc-working-with-vs-against-ai-companies&quot;&gt;Working with vs. against AI companies&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#political-diplomacy-vs-advocacy&quot; id=&quot;markdown-toc-political-diplomacy-vs-advocacy&quot;&gt;Political diplomacy vs. advocacy&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#conflicts-that-arent-man-vs-man-but-nonetheless-require-an-answer&quot; id=&quot;markdown-toc-conflicts-that-arent-man-vs-man-but-nonetheless-require-an-answer&quot;&gt;Conflicts that aren’t “man vs. man” but nonetheless require an answer&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#pause-vs-responsible-scaling-policy-rsp&quot; id=&quot;markdown-toc-pause-vs-responsible-scaling-policy-rsp&quot;&gt;Pause vs. Responsible Scaling Policy (RSP)&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#policy-research-vs-policy-advocacy&quot; id=&quot;markdown-toc-policy-research-vs-policy-advocacy&quot;&gt;Policy research vs. policy advocacy&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#advocacy-directed-at-policy-makers-vs-the-general-public&quot; id=&quot;markdown-toc-advocacy-directed-at-policy-makers-vs-the-general-public&quot;&gt;Advocacy directed at policy-makers vs. the general public&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#organizations&quot; id=&quot;markdown-toc-organizations&quot;&gt;Organizations&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#important-disclaimers&quot; id=&quot;markdown-toc-important-disclaimers&quot;&gt;Important disclaimers&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#ai-policy-institute&quot; id=&quot;markdown-toc-ai-policy-institute&quot;&gt;AI Policy Institute&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#ai-safety-and-governance-fund&quot; id=&quot;markdown-toc-ai-safety-and-governance-fund&quot;&gt;AI Safety and Governance Fund&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#ai-standards-lab&quot; id=&quot;markdown-toc-ai-standards-lab&quot;&gt;AI Standards Lab&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#campaign-for-ai-safety&quot; id=&quot;markdown-toc-campaign-for-ai-safety&quot;&gt;Campaign for AI Safety&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#centre-for-enabling-ea-learning-and-research-ceealar&quot; id=&quot;markdown-toc-centre-for-enabling-ea-learning-and-research-ceealar&quot;&gt;Centre for Enabling EA Learning and Research (CEEALAR)&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#center-for-ai-policy&quot; id=&quot;markdown-toc-center-for-ai-policy&quot;&gt;Center for AI Policy&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#center-for-ai-safety&quot; id=&quot;markdown-toc-center-for-ai-safety&quot;&gt;Center for AI Safety&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#center-for-human-compatible-ai&quot; id=&quot;markdown-toc-center-for-human-compatible-ai&quot;&gt;Center for Human-Compatible AI&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#center-for-long-term-resilience&quot; id=&quot;markdown-toc-center-for-long-term-resilience&quot;&gt;Center for Long-Term Resilience&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#center-for-security-and-emerging-technology-cset&quot; id=&quot;markdown-toc-center-for-security-and-emerging-technology-cset&quot;&gt;Center for Security and Emerging Technology (CSET)&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#centre-for-long-term-policy&quot; id=&quot;markdown-toc-centre-for-long-term-policy&quot;&gt;Centre for Long-Term Policy&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#centre-for-the-governance-of-ai&quot; id=&quot;markdown-toc-centre-for-the-governance-of-ai&quot;&gt;Centre for the Governance of AI&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#civai&quot; id=&quot;markdown-toc-civai&quot;&gt;CivAI&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#control-ai&quot; id=&quot;markdown-toc-control-ai&quot;&gt;Control AI&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#existential-risk-observatory&quot; id=&quot;markdown-toc-existential-risk-observatory&quot;&gt;Existential Risk Observatory&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#future-of-life-institute-fli&quot; id=&quot;markdown-toc-future-of-life-institute-fli&quot;&gt;Future of Life Institute (FLI)&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#future-society&quot; id=&quot;markdown-toc-future-society&quot;&gt;Future Society&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#horizon-institute-for-public-service&quot; id=&quot;markdown-toc-horizon-institute-for-public-service&quot;&gt;Horizon Institute for Public Service&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#institute-for-ai-policy-and-strategy&quot; id=&quot;markdown-toc-institute-for-ai-policy-and-strategy&quot;&gt;Institute for AI Policy and Strategy&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#lightcone-infrastructure&quot; id=&quot;markdown-toc-lightcone-infrastructure&quot;&gt;Lightcone Infrastructure&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#machine-intelligence-research-institute-miri&quot; id=&quot;markdown-toc-machine-intelligence-research-institute-miri&quot;&gt;Machine Intelligence Research Institute (MIRI)&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#manifund&quot; id=&quot;markdown-toc-manifund&quot;&gt;Manifund&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#model-evaluation-and-threat-research-metr&quot; id=&quot;markdown-toc-model-evaluation-and-threat-research-metr&quot;&gt;Model Evaluation and Threat Research (METR)&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#palisade-research&quot; id=&quot;markdown-toc-palisade-research&quot;&gt;Palisade Research&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#pauseai-global&quot; id=&quot;markdown-toc-pauseai-global&quot;&gt;PauseAI Global&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#pauseai-us&quot; id=&quot;markdown-toc-pauseai-us&quot;&gt;PauseAI US&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#sentinel-rapid-emergency-response-team&quot; id=&quot;markdown-toc-sentinel-rapid-emergency-response-team&quot;&gt;Sentinel rapid emergency response team&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#simon-institute-for-longterm-governance&quot; id=&quot;markdown-toc-simon-institute-for-longterm-governance&quot;&gt;Simon Institute for Longterm Governance&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#stop-ai&quot; id=&quot;markdown-toc-stop-ai&quot;&gt;Stop AI&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#where-im-donating&quot; id=&quot;markdown-toc-where-im-donating&quot;&gt;Where I’m donating&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#prioritization-within-my-top-five&quot; id=&quot;markdown-toc-prioritization-within-my-top-five&quot;&gt;Prioritization within my top five&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#where-im-donating-this-is-the-section-in-which-i-actually-say-where-im-donating&quot; id=&quot;markdown-toc-where-im-donating-this-is-the-section-in-which-i-actually-say-where-im-donating&quot;&gt;Where I’m donating (this is the section in which I actually say where I’m donating)&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#changelog&quot; id=&quot;markdown-toc-changelog&quot;&gt;Changelog&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#notes&quot; id=&quot;markdown-toc-notes&quot;&gt;Notes&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h1 id=&quot;i-dont-like-donating-to-x-risk&quot;&gt;I don’t like donating to x-risk&lt;/h1&gt;

&lt;p&gt;(This section is about my personal motivations. The arguments and logic start in the &lt;a href=&quot;#cause-prioritization&quot;&gt;next section&lt;/a&gt;.)&lt;/p&gt;

&lt;p&gt;For more than a decade I’ve leaned toward longtermism and I’ve been concerned about existential risk, but I’ve never directly donated to x-risk reduction. I dislike x-risk as a cause area on an emotional level for a few reasons:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;In the present day, aggregate animal welfare matters far more than aggregate human welfare (credence: 90%). Present-day animal suffering is so extraordinarily vast that on some level it feels irresponsible to prioritize anything else, even though rationally I buy the arguments for longtermism.&lt;/li&gt;
  &lt;li&gt;Animal welfare is more neglected than x-risk (credence: 90%).&lt;sup id=&quot;fnref:4&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:4&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;&lt;/li&gt;
  &lt;li&gt;People who prioritize x-risk often disregard animal welfare (or the welfare of non-human beings, whatever shape those beings might take in the future). That makes me distrust their reasoning on cause prioritization. (This isn’t universally true—I know some people who care about animals but still prioritize x-risk.)&lt;/li&gt;
  &lt;li&gt;I find it distasteful the way people often talk about “human extinction”, which seemingly ignores the welfare of all other sentient beings.&lt;sup id=&quot;fnref:13&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:13&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For a while, I donated to animal-focused orgs that looked good under longtermism, like &lt;a href=&quot;https://www.sentienceinstitute.org/&quot;&gt;Sentience Institute&lt;/a&gt;. In recent years, I’ve avoided thinking about cause prioritization by supporting global priorities research (such as by donating to the &lt;a href=&quot;https://globalprioritiesinstitute.org/&quot;&gt;Global Priorities Institute&lt;/a&gt;)—pay them to think about cause prioritization so I don’t have to. I still believe there’s a good case for that sort of research, but the case for existential risk is stronger (more on this &lt;a href=&quot;#x-risk-vs-global-priorities-research&quot;&gt;below&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;I’ve spent too long ignoring my rationally-formed beliefs about x-risk because they felt emotionally wrong. I’m normally pretty good at biting bullets. I should bite this bullet, too.&lt;/p&gt;

&lt;p&gt;This decision to prioritize x-risk (mostly&lt;sup id=&quot;fnref:36&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:36&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt;) didn’t happen because I changed my mind. It happened because I realized I was stupidly letting my emotional distaste toward x-risk sway my decision-making.&lt;/p&gt;

&lt;p&gt;On the other hand, I’ve become more worried about AI in the last few years. My P(doom) hasn’t really gone up, but the threat of misaligned AI has become more &lt;em&gt;visceral&lt;/em&gt;. I believe unaligned AI is my most likely cause of death, and I’d rather not die.&lt;sup id=&quot;fnref:54&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:54&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;h1 id=&quot;cause-prioritization&quot;&gt;Cause prioritization&lt;/h1&gt;
&lt;h2 id=&quot;s-risk-research-and-animal-focused-longtermism&quot;&gt;S-risk research and animal-focused longtermism&lt;/h2&gt;

&lt;p&gt;I believe animal-focused (or non-human-focused) longtermist work is important (credence: 95%), and that it’s far more neglected than (human-focused) x-risk reduction (credence: 99%). I believe the same about &lt;a href=&quot;https://forum.effectivealtruism.org/posts/k6fJXBnc7YnDcxsQm/s-risks-fates-worse-than-extinction&quot;&gt;s-risk&lt;/a&gt; research (and s-risks heavily overlap with animal-focused longtermism, so that’s not a coincidence). But I also believe:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;At equivalent levels of funding, marginal work on x-risk is more cost-effective (credence: 75%) because non-human welfare is likely to &lt;a href=&quot;https://mdickens.me/2015/08/15/is_preventing_human_extinction_good/&quot;&gt;turn out okay&lt;/a&gt; if we develop friendly AI.&lt;/li&gt;
  &lt;li&gt;The cost-effectiveness of x-risk funding diminishes slowly enough that it’s better even at current funding levels (credence: 65%), especially because some of the most promising sub-fields within x-risk remain poorly funded.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Improving animal welfare likely has good flow-through effects into the distant future. But I think those flow-through effects don’t have a huge expected value compared to x-risk reduction because they only matter under certain conditions (I discussed these conditions a while back in &lt;a href=&quot;https://mdickens.me/2015/08/15/is_preventing_human_extinction_good/&quot;&gt;Is Preventing Human Extinction Good?&lt;/a&gt; and &lt;a href=&quot;https://mdickens.me/2015/09/10/on_values_spreading/&quot;&gt;On Values Spreading&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;This judgment is hard to make with confidence because it requires speculating about what the distant future will look like.&lt;/p&gt;

&lt;p&gt;In &lt;a href=&quot;https://forum.effectivealtruism.org/posts/EkKYqeAy3ArupKuYn/my-donations-2023-marcus-abramovitch&quot;&gt;Marcus Abramovitch’s excellent writeup&lt;/a&gt; on where he donated in 2023, he said,&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;I don’t think many x-risk organizations are fundamentally constrained on dollars and several organizations could be a lot more frugal and have approximately equal results.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I basically agree with this, but I think there are some x-risk orgs that need more funding, and they’re among the most promising orgs.&lt;/p&gt;

&lt;h2 id=&quot;x-risk-vs-global-priorities-research&quot;&gt;X-risk vs. global priorities research&lt;/h2&gt;

&lt;p&gt;A dilemma:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;We can’t &lt;em&gt;fully&lt;/em&gt; align AI until we solve some foundational problems in ethics and other fields.&lt;/li&gt;
  &lt;li&gt;If we don’t align AI, we will go extinct before we solve those foundational problems.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;(One proposed solution is to conduct a &lt;a href=&quot;https://forum.effectivealtruism.org/topics/long-reflection&quot;&gt;long reflection&lt;/a&gt;, and basically put a superintelligent AI on standby mode until that’s done. This proposal has some issues but I haven’t heard anything better.)&lt;/p&gt;

&lt;p&gt;So, should we focus on x-risk or global priorities research?&lt;/p&gt;

&lt;p&gt;Ultimately, I think x-risk is the higher priority (credence: 70%). If we build a (mostly) friendly AI without really figuring out some details of AI alignment, maybe we can work things out from there. But the &lt;a href=&quot;https://globalprioritiesinstitute.org/wp-content/uploads/gpi-research-agenda.pdf&quot;&gt;problems&lt;/a&gt; in global priorities research seem so complex that we have essentially no chance of solving them before AGI arrives (unless AGI turns out to take much longer than expected), regardless of how much funding goes to global priorities research.&lt;/p&gt;

&lt;h2 id=&quot;prioritization-within-x-risk&quot;&gt;Prioritization within x-risk&lt;/h2&gt;

&lt;p&gt;Among the various existential risks, AI risk stands out as clearly the most important. I believe this for essentially the same reasons that most people believe it.&lt;/p&gt;

&lt;p&gt;In short:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Natural risks are less concerning than human-caused risks (credence: 98%).&lt;/li&gt;
  &lt;li&gt;Climate change is not a serious existential threat (credence: 90%). See John Halstead’s report &lt;a href=&quot;https://johnhalstead.org/wp-content/uploads/2023/11/Climate-Change-Longtermism-1.pdf&quot;&gt;Climate Change &amp;amp; Longtermism&lt;/a&gt;.&lt;/li&gt;
  &lt;li&gt;Engineered pandemics are considerably less likely to cause extinction than AI (credence: 95%). I’ve heard biologists in the x-risk space claim that it would be very hard for a pandemic to cause total extinction.&lt;/li&gt;
  &lt;li&gt;Nuclear war is worrisome but less of an extinction risk than AI (credence: 85%). See 80,000 Hours’ &lt;a href=&quot;https://forum.effectivealtruism.org/posts/j8nyJ3pv5Q4rz4Bg2/new-80k-problem-profile-nuclear-weapons#How_likely_is_an_existential_catastrophe_resulting_from_nuclear_war_&quot;&gt;table of x-risk estimates&lt;/a&gt; for nuclear war.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For more, see Michael Aird’s &lt;a href=&quot;https://forum.effectivealtruism.org/posts/JQQAQrunyGGhzE23a/database-of-existential-risk-estimates&quot;&gt;database of existential risk estimates&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id=&quot;ai-safety-technical-research-vs-policy&quot;&gt;AI safety technical research vs. policy&lt;/h2&gt;

&lt;p&gt;There are a few high-level strategies for dealing with AI risk. We can broadly classify them into (1) technical research and (2) policy. Basically:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;technical research = figure out how to prevent AI from killing everyone&lt;/li&gt;
  &lt;li&gt;policy = increase the probability that policies/regulations will reduce x-risk&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;(You could further divide policy into research vs. advocacy—i.e., figure out what makes for good regulations vs. advocate for regulations to be enacted. I’ll talk more about that later.)&lt;/p&gt;

&lt;p&gt;I don’t have any expertise in AI&lt;sup id=&quot;fnref:25&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:25&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;5&lt;/a&gt;&lt;/sup&gt; and I don’t know what kinds of alignment research are most promising, but experts can’t seem to agree either—some think &lt;a href=&quot;https://www.alignmentforum.org/posts/YTq4X6inEudiHkHDF/prosaic-ai-alignment&quot;&gt;prosaic alignment&lt;/a&gt; will work, others think we need fundamentally new paradigms. (I lean toward the latter (credence: 70%).)&lt;/p&gt;

&lt;p&gt;But I don’t see how we are going to solve AI alignment. The best existing research seems like maybe it has some chance of someday leading to some method that could eventually solve alignment with enough work, perhaps.&lt;sup id=&quot;fnref:47&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:47&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;6&lt;/a&gt;&lt;/sup&gt; Our best hope is that either (1) AGI turns out to be much harder to develop than it looks or (2) solving alignment turns out to be really easy for some unforeseen reason.&lt;sup id=&quot;fnref:15&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:15&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;7&lt;/a&gt;&lt;/sup&gt; But those hopes are non-actionable—I can’t increase their probability by donating money.&lt;/p&gt;

&lt;p&gt;Ever since I became concerned about AI risk (about a decade ago), I’ve weakly believed that we were not on pace to solve alignment before AGI arrived. But I thought perhaps technical research would become sufficiently popular as the dangers of AI became more apparent. By now, it’s clear that that isn’t happening, and we’re not going to solve AI alignment in time unless the problem turns out to be easy.&lt;/p&gt;

&lt;p&gt;I used to be even more pessimistic about AI policy than technical research,&lt;sup id=&quot;fnref:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;8&lt;/a&gt;&lt;/sup&gt; but now I think it’s the more promising approach (credence: 80%). Surprisingly (to me), AI safety (as in notkilleveryoneism) is now kinda sorta mainstream, and there’s some degree of political will for creating regulations that could prevent AI from killing everyone. SB 1047, which might have meaningfully decreased x-risk, saw &lt;a href=&quot;https://theaipi.org/voters-support-sb1047-in-collaborative-poll/&quot;&gt;widespread support&lt;/a&gt;. (Unfortunately, one &lt;a href=&quot;https://x.com/KelseyTuoc/status/1838279944750927929&quot;&gt;particular guy&lt;/a&gt; with veto power did not support it.&lt;sup id=&quot;fnref:16&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:16&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;9&lt;/a&gt;&lt;/sup&gt;)&lt;/p&gt;

&lt;p&gt;Another consideration: bad policy work can backfire. I don’t know much about policy and I’m relatively bad at understanding people, so on priors, I don’t expect to be good at figuring out which policy efforts will work. I used to think I should defer to people with better social skills. But now I’ve seen some of the poor results produced by policy orgs that care a lot about reputation management, and I’ve seen that messaging about extinction is much more palatable than the people with good social skills predicted (e.g., as demonstrated by &lt;a href=&quot;https://theaipi.org/media/&quot;&gt;public opinion polling&lt;/a&gt;). I now think I overrated others’ judgment and underrated my own. As a consequence, I feel more confident that I can identify which policy orgs are doing good work.&lt;/p&gt;

&lt;p&gt;In summary:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;I don’t think technical research is going to work.&lt;/li&gt;
  &lt;li&gt;Policy might work.&lt;/li&gt;
  &lt;li&gt;I think I’m qualified enough to evaluate policy orgs.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So I want to donate to something related to AI policy.&lt;/p&gt;

&lt;h3 id=&quot;quantitative-model-on-research-vs-policy&quot;&gt;Quantitative model on research vs. policy&lt;/h3&gt;

&lt;p&gt;I built a coarse &lt;a href=&quot;https://squigglehub.org/models/mdickens/ai-research-vs-policy&quot;&gt;quantitative model&lt;/a&gt; of the expected value of donations to technical research vs. policy. The inputs are very rough, but the model illustrates some important principles.&lt;/p&gt;

&lt;p&gt;(Disclaimer: First I decided to donate to AI policy, then I built the model, not the other way around.&lt;sup id=&quot;fnref:38&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:38&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;10&lt;/a&gt;&lt;/sup&gt; If the model had disagreed with my beliefs, then I would have changed the model. But if I couldn’t find a reasonable way to make the model fit my beliefs, then I would have changed my beliefs.)&lt;/p&gt;

&lt;p&gt;Preventing x-risk works like voting. All the expected value of your vote comes from the situation where the outcome is exactly tied and your vote breaks the tie. If the expected vote count is close to 50/50, your vote has a high EV. If it’s far from 50/50, there’s an extremely small probability that your vote will matter.&lt;/p&gt;

&lt;p&gt;If I believed that it would cost (say) $10 billion to solve AI alignment, and also the total spending without my donation would be close to $10 billion, then my donation to alignment research has a high EV. But in fact I believe it probably costs much more than we’re going to spend&lt;sup id=&quot;fnref:63&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:63&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;11&lt;/a&gt;&lt;/sup&gt; (assuming no regulations).&lt;/p&gt;

&lt;p&gt;On a naive view, that means a donation to alignment research has extremely low EV. But that’s not correct because it doesn’t account for uncertainty. My median guess is that solving AI alignment will cost maybe $100 billion, and we will only actually spend $1 billion.&lt;sup id=&quot;fnref:20&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:20&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;12&lt;/a&gt;&lt;/sup&gt; If my credence intervals for those two numbers followed normal distributions, then the probability of making a difference would be incredibly small (like, “number of atoms in the solar system” small) because normal distributions have extremely low probability mass in the tails. But my beliefs have wide credence intervals, and they’re not normally distributed. So my distribution for cost-to-solve-alignment heavily overlaps with total-spending-on-alignment.&lt;/p&gt;

&lt;p&gt;It’s hard to have good intuitions for the probability that a donation makes a difference when that probability depends on the intersection of two overlapping fat-tailed distributions. That’s the sort of thing a quantitative model can help with, even if you don’t take the numbers too seriously.&lt;/p&gt;
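&lt;p&gt;To make the shape of that calculation concrete, here is a minimal Monte Carlo sketch of the research side. The lognormal parameters, the $1M donation size, and the threshold model (a donation matters only if it pushes total spending across the cost-to-solve line) are all illustrative assumptions of mine; the actual numbers live in the linked Squiggle model.&lt;/p&gt;

```python
import math
import random

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def lognormal_cdf(x, median, sigma):
    """CDF of a lognormal parameterized by its median and log-space sigma."""
    return normal_cdf((math.log(x) - math.log(median)) / sigma)

# Made-up inputs loosely matching the text: solving alignment has a median
# cost of $100B, total spending a median of $1B, both fat-tailed.
COST_MEDIAN, COST_SIGMA = 100e9, 2.0
SPEND_MEDIAN, SPEND_SIGMA = 1e9, 1.5
DONATION = 1e6  # hypothetical $1M donation

random.seed(0)
N = 20_000
total = 0.0
for _ in range(N):
    # Sample total spending, then ask: what is the probability that the true
    # cost falls in the narrow window where this donation is pivotal?
    spend = SPEND_MEDIAN * math.exp(random.gauss(0.0, SPEND_SIGMA))
    total += (lognormal_cdf(spend + DONATION, COST_MEDIAN, COST_SIGMA)
              - lognormal_cdf(spend, COST_MEDIAN, COST_SIGMA))

p_pivotal = total / N
print(f"P(donation is pivotal) = {p_pivotal:.1e}")
```

&lt;p&gt;With these inputs, the probability of being pivotal comes out on the order of 1e-5: tiny, but astronomically larger than what two non-overlapping normal distributions would give.&lt;/p&gt;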

&lt;p&gt;Ultimately, I think AI policy has higher EV than technical research. According to &lt;a href=&quot;https://squigglehub.org/models/mdickens/ai-research-vs-policy&quot;&gt;my made-up numbers&lt;/a&gt;, donating to policy is ~3x more cost-effective than donating to research. Somewhat more pessimistic inputs can change the ratio to &amp;gt;1000x.&lt;sup id=&quot;fnref:39&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:39&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;13&lt;/a&gt;&lt;/sup&gt; The ratio flips to &amp;lt;1x if you think there’s close to a 50/50 chance that we solve alignment without any government intervention.&lt;/p&gt;

&lt;h1 id=&quot;man-versus-man-conflicts-within-ai-policy&quot;&gt;“Man versus man” conflicts within AI policy&lt;/h1&gt;

&lt;p&gt;Some prioritization decisions are positive sum: people can work on technical research and policy at the same time. But others are zero sum. I’m wary of &lt;a href=&quot;https://slatestarcodex.com/2015/09/22/beware-systemic-change/&quot;&gt;“man versus man” conflict&lt;/a&gt;—of working in opposition to other (loosely) value-aligned people. But in policy, sometimes you have to engage in “man versus man” conflict. I want to think extra carefully before doing so.&lt;/p&gt;

&lt;p&gt;There are a few such conflicts within AI policy.&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Strategy A&lt;/th&gt;
      &lt;th&gt;Strategy B&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Do capabilities and safety research in parallel. We need more advanced models to better understand what AGI will look like.&lt;/td&gt;
      &lt;td&gt;Slow down capabilities research to buy more time for safety research.&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Don’t push for regulations, they will excessively harm technological development.&lt;/td&gt;
      &lt;td&gt;Push for regulations because the risk is worth it.&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Take the time to figure out nuanced regulations that won’t impede the good parts of AI.&lt;/td&gt;
      &lt;td&gt;Push for regulations ASAP, even if they have worse side effects.&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Cooperate with big AI companies to persuade them to behave more safely.&lt;/td&gt;
      &lt;td&gt;Work against AI companies to stop their dangerous behaviors.&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Diplomatically develop political connections, and later use those to push for AI safety policies.&lt;/td&gt;
      &lt;td&gt;Loudly argue for AI safety policies now, even if it makes us look weird.&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;In every case, I like strategy B better. But how sure am I that I’m on the right side?&lt;/p&gt;

&lt;h2 id=&quot;parallel-safetycapabilities-vs-slowing-ai&quot;&gt;Parallel safety/capabilities vs. slowing AI&lt;/h2&gt;

&lt;p&gt;What are the arguments for advancing capabilities?&lt;/p&gt;

&lt;h4 id=&quot;if-we-dont-advance-capabilities-china-will-or-some-other-company-that-doesnt-care-about-safety-will&quot;&gt;&lt;strong&gt;If we don’t advance capabilities, China will. Or some other company that doesn’t care about safety will.&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;This is the strongest argument by my assessment, but I still think it’s wrong (credence: 90%).&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;The companies that have done the most to accelerate AI development have all done so in the name of safety. If they didn’t believe advancing the frontier was the safest move, the world would be in a much safer position right now.&lt;/li&gt;
  &lt;li&gt;It’s likely that international treaties could prevent arms races—they worked well(ish) for nuclear weapons.&lt;/li&gt;
  &lt;li&gt;China is behind the US on AI. There is no need to race harder when you’re ahead.&lt;/li&gt;
  &lt;li&gt;By my amateurish judgment, China doesn’t seem as interested in an arms race as the US. You don’t need to race against someone who’s not racing.&lt;/li&gt;
  &lt;li&gt;How sure are you that it’s better for the United States to develop AI first? (China is less interested in controlling world politics than the US, and the Chinese government seems more concerned about AI risk than the US government.)&lt;/li&gt;
  &lt;li&gt;Who develops superintelligent AI first only matters in the narrow scenario where AI alignment is easy but also the AI can and will be used by its creators to take over the world:
    &lt;ul&gt;
      &lt;li&gt;If AI alignment is hard, it doesn’t matter who develops it first because everyone dies either way.&lt;/li&gt;
      &lt;li&gt;If the AI is &lt;em&gt;fully&lt;/em&gt; aligned, it will refuse to fulfill any unethical requests its creator makes (such as taking over the world).&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;I don’t think we as a society have a good grasp on the game theory of arms races but I feel like the solution isn’t “push the arms race forward even faster”.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4 id=&quot;ai-companies-need-to-build-state-of-the-art-sota-models-so-they-can-learn-how-to-align-those-models&quot;&gt;&lt;strong&gt;AI companies need to build state-of-the-art (SOTA) models so they can learn how to align those models.&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;I’ve heard people at Anthropic make this argument. But it’s disingenuous (or at least motivated)&lt;sup id=&quot;fnref:5&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:5&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;14&lt;/a&gt;&lt;/sup&gt; because Anthropic is accelerating capabilities, not just matching the capabilities of pre-existing models—and they have a history of &lt;a href=&quot;https://www.lesswrong.com/posts/JbE7KynwshwkXPJAJ/anthropic-release-claude-3-claims-greater-than-gpt-4?commentId=hwWB4yJyEGhEWud8C&quot;&gt;almost-but-not-technically lying&lt;/a&gt; about whether they were going to advance capabilities.&lt;sup id=&quot;fnref:7&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:7&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;15&lt;/a&gt;&lt;/sup&gt; And the argument doesn’t really make sense because:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;They could satisfy their stated goal nearly as well by training models that are (say) 3/4 as good as the state of the art (credence: 90%).&lt;/li&gt;
  &lt;li&gt;Or they could make deals with other companies to do safety research on their pre-existing SOTA models. This would satisfy their stated goal (credence: 98%). Companies might not be willing to cooperate like this, but surely it’s worth trying (and then &lt;a href=&quot;https://www.lesswrong.com/posts/fhEPnveFhb9tmd7Pe/use-the-try-harder-luke&quot;&gt;trying harder&lt;/a&gt;).&lt;/li&gt;
  &lt;li&gt;There are many types of plausibly-productive alignment research that don’t require SOTA models (credence: 90%).&lt;sup id=&quot;fnref:41&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:41&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;16&lt;/a&gt;&lt;/sup&gt;&lt;/li&gt;
  &lt;li&gt;Having SOTA models doesn’t differentially improve alignment—it teaches you just as much about how to improve capabilities (credence: 60%).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If the bolded argument is correct, then AI companies should:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Temporarily stop AI development.&lt;/li&gt;
  &lt;li&gt;Learn everything they possibly can about AI alignment with the current model.&lt;/li&gt;
  &lt;li&gt;Publish a report on how they would use a more capable AI to improve alignment.&lt;/li&gt;
  &lt;li&gt;Get review from third-party alignment researchers.&lt;/li&gt;
  &lt;li&gt;If reviewers have a strong consensus that the report is reasonable, only then resume AI development.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That is the sort of behavior you would see from a company that takes existential risk appropriately seriously.&lt;/p&gt;

&lt;h4 id=&quot;we-need-to-develop-ai-as-soon-as-possible-because-it-will-greatly-improve-peoples-lives-and-were-losing-out-on-a-huge-opportunity-cost&quot;&gt;&lt;strong&gt;We need to develop AI as soon as possible because it will greatly improve people’s lives and we’re losing out on a huge opportunity cost.&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;This argument only makes sense if you have a very low P(doom) (like &amp;lt;0.1%) or if you place minimal value on future generations. Otherwise, it’s not worth recklessly endangering the future of humanity to bring utopia a few years (or maybe decades) sooner. The math on this is really simple—bringing AI sooner only benefits the current generation, but extinction harms all future generations. You don’t need to be a strong longtermist, you just need to accord significant value to people who aren’t born yet.&lt;/p&gt;
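&lt;p&gt;Here is that simple math with illustrative numbers. The extra risk from racing, the years gained, the lifespan, and the future population are all assumptions I picked for the arithmetic, not estimates:&lt;/p&gt;

```python
# Back-of-envelope: is racing to AGI worth the risk?
# All numbers below are assumptions for the sake of the arithmetic.
P_DOOM_RACING = 0.10   # extra extinction risk from racing (assumed)
YEARS_GAINED = 20      # utopia arrives 20 years sooner (assumed)
CURRENT_POP = 8e9
FUTURE_PEOPLE = 1e12   # a deliberately conservative future population

# Racing benefits only the current generation; extinction forfeits everyone after.
benefit = CURRENT_POP * YEARS_GAINED        # person-years gained by racing
cost = P_DOOM_RACING * FUTURE_PEOPLE * 100  # expected person-years lost (100-year lives)

print(f"benefit of racing: {benefit:.1e} person-years")
print(f"expected cost:     {cost:.1e} person-years")
print(f"cost/benefit ratio: {cost / benefit:.0f}")
```

&lt;p&gt;Even with a future population only ~100x the current one, the expected cost dwarfs the benefit by more than an order of magnitude; only a very low P(doom) or a near-zero weight on future people flips the sign.&lt;/p&gt;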

&lt;p&gt;I’ve heard a related argument that the size of the accessible lightcone is rapidly shrinking, so we need to build AI ASAP even if the risk is high. If you do the math, this argument doesn’t make any sense (credence: 95%). The value of the outer edge of the lightcone is extremely small compared to its total volume.&lt;sup id=&quot;fnref:9&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:9&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;17&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;
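&lt;p&gt;A rough version of that math, using a crude model I picked for illustration (the reachable volume shrinks on roughly the Hubble timescale; none of these numbers are precise):&lt;/p&gt;

```python
# Crude model (my assumption): the reachable universe is a sphere whose
# volume goes roughly like T^3 on cosmological timescales, so the fraction
# lost per year of delay is about 3/T with T near the Hubble time.
HUBBLE_TIME_YEARS = 1.4e10
DELAY_YEARS = 100

fraction_lost = 3 * DELAY_YEARS / HUBBLE_TIME_YEARS  # share of lightcone forfeited
xrisk_reduction = 1e-3  # suppose the delay cuts extinction risk by just 0.1pp

print(f"lightcone lost to a {DELAY_YEARS}-year delay: {fraction_lost:.0e}")
print(f"expected lightcone saved by the delay:       {xrisk_reduction:.0e}")
print(f"the delay wins by a factor of roughly {xrisk_reduction / fraction_lost:,.0f}")
```

&lt;p&gt;A century of delay forfeits a few parts in a hundred million of the lightcone, so even a trivial reduction in extinction risk pays for it tens of thousands of times over.&lt;/p&gt;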

&lt;p&gt;AI could be the best thing that’s ever happened. But it can also be the best thing that’s ever happened 10/20/100 years from now, and if delaying AI lets us greatly reduce existential risk, then it’s worth the delay.&lt;/p&gt;

&lt;h4 id=&quot;we-should-advance-capabilities-to-avoid-a-hardware-overhang-a-situation-where-ai-can-be-improved-purely-by-throwing-more-hardware-at-it-which-is-potentially-dangerous-because-it-could-cause-ai-to-leap-forward-without-giving-people-time-to-prepare&quot;&gt;&lt;strong&gt;We should advance capabilities to avoid a “hardware overhang”&lt;/strong&gt;: a situation where AI can be improved purely by throwing more hardware at it, which is potentially dangerous because it could cause AI to leap forward without giving people time to prepare.&lt;/h4&gt;

&lt;p&gt;Sam Altman has made this argument. But he’s disingenuous (credence: 95%) because he also wants to &lt;a href=&quot;https://www.cnbc.com/2024/02/09/openai-ceo-sam-altman-reportedly-seeking-trillions-of-dollars-for-ai-chip-project.html&quot;&gt;fund hardware advances&lt;/a&gt;, which will increase hardware overhang.&lt;sup id=&quot;fnref:8&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:8&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;18&lt;/a&gt;&lt;/sup&gt; And:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;This argument implies that AI companies should stop looking for algorithmic improvements (credence: 90%) because they don’t produce overhang.&lt;/li&gt;
  &lt;li&gt;Pausing AI development would reduce demand for AI chips, slowing down hardware development.&lt;/li&gt;
  &lt;li&gt;Eliminating overhang only helps if we can meaningfully advance alignment using the higher level of capabilities. That seems unlikely to be worth the tradeoff, because alignment has historically progressed very slowly. We are not on pace to solve alignment, with or without an overhang.
    &lt;ul&gt;
      &lt;li&gt;If we will be able to align a bigger model, shouldn’t it be even easier to align the models we currently have? But we don’t know how to align the models we have (beyond the superficial pseudo-alignment that RLHF produces).&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;An AI Impacts &lt;a href=&quot;https://blog.aiimpacts.org/p/are-there-examples-of-overhang-for&quot;&gt;report&lt;/a&gt; described some examples of overhang in other industries. “None of them match the behavior that people seem to expect will happen with hardware overhang.”&lt;/li&gt;
&lt;/ul&gt;

&lt;h4 id=&quot;we-need-agi-to-prevent-some-other-existential-risk-from-killing-everyone&quot;&gt;&lt;strong&gt;We need AGI to prevent some other existential risk from killing everyone.&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;Nearly every respectable person&lt;sup id=&quot;fnref:75&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:75&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;19&lt;/a&gt;&lt;/sup&gt; who has estimated x-risk probabilities &lt;a href=&quot;https://forum.effectivealtruism.org/posts/JQQAQrunyGGhzE23a/database-of-existential-risk-estimates&quot;&gt;agrees&lt;/a&gt; that AI is by far the largest x-risk in the next century.&lt;/p&gt;

&lt;h4 id=&quot;its-okay-to-advance-capabilities-because-ai-does-not-pose-an-existential-risk&quot;&gt;&lt;strong&gt;It’s okay to advance capabilities because AI does not pose an existential risk.&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;This is a popular argument, but presumably everyone reading this already disagrees, so I’m not going to attempt to rebut it.&lt;/p&gt;

&lt;hr /&gt;

&lt;p&gt;The parallel safety/capabilities side of the argument seems weak to me (and relies on a lot of what looks like motivated reasoning), so I feel comfortable supporting the pause side (credence: 85%).&lt;sup id=&quot;fnref:10&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:10&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;20&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;But there’s some common ground:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Both sides should agree that slowing down hardware is good or at least neutral (credence: 75%).&lt;sup id=&quot;fnref:42&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:42&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;21&lt;/a&gt;&lt;/sup&gt; This alleviates every concern except for the one about the opportunity costs of delaying development.&lt;/li&gt;
  &lt;li&gt;Both sides should support regulations and international treaties that restrict the speed of AI development (credence: 65%). International treaties alleviate concerns about arms races and about needing to stay on the cutting edge.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;freedom-vs-regulation&quot;&gt;Freedom vs. regulation&lt;/h2&gt;

&lt;p&gt;Arguments against regulation:&lt;/p&gt;

&lt;h4 id=&quot;regulations-to-slow-ai-would-require-the-government-to-take-authoritarian-measures&quot;&gt;&lt;strong&gt;Regulations to slow AI would require the government to take authoritarian measures.&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;This argument seems pretty wrong to me (credence: 95%). &lt;a href=&quot;https://mxschons.com/2024/comparing-ai-labs-and-pharmaceutical-companies/&quot;&gt;Other industries&lt;/a&gt; have much stricter regulations than AI without slipping into totalitarianism. If the regulations on AI GPUs were as strict as the ones on, say, pseudoephedrine, that would be sufficient to slow and monitor hardware development.&lt;sup id=&quot;fnref:12&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:12&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;22&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;Even if the regulations required individual people to turn in their GPUs to the government (and I don’t know why that would be required, because GPU manufacturing is pretty centralized), there’s already precedent for that sort of thing in relatively free societies: in 1996, the Australian government mandated that citizens turn in newly prohibited firearms.&lt;/p&gt;

&lt;h4 id=&quot;regulations-to-slow-ai-might-be-nearly-impossible-to-lift-even-if-ai-alignment-gets-solved-and-then-we-wont-get-the-glorious-transhumanist-future&quot;&gt;&lt;strong&gt;Regulations to slow AI might be nearly impossible to lift even if AI alignment gets solved, and then we won’t get the glorious transhumanist future.&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;I do think this is a real concern. Ultimately I believe it’s worth the tradeoff. And it does seem unlikely that excessive regulations could stay in place &lt;em&gt;forever&lt;/em&gt;—I doubt we’d have the knowledge to develop friendly AI, but not the regulatory freedom, for (say) a thousand years.&lt;/p&gt;

&lt;p&gt;(The United States essentially stopped developing nuclear power in the 1990s due to onerous regulations, but it just &lt;a href=&quot;https://apnews.com/article/georgia-power-nuclear-reactor-vogtle-9555e3f9169f2d58161056feaa81a425&quot;&gt;opened&lt;/a&gt; a new plant last year.)&lt;/p&gt;

&lt;h2 id=&quot;slow-nuanced-regulation-vs-fast-coarse-regulation&quot;&gt;Slow nuanced regulation vs. fast coarse regulation&lt;/h2&gt;

&lt;p&gt;Some argue that we should advocate for regulation, but push nuanced messaging to make sure we don’t hamstring economic development.&lt;/p&gt;

&lt;p&gt;This disagreement largely comes down to P(doom) and AI timelines:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;If P(doom) is low, it’s worth accepting some extra risk to figure out how to write careful regulations.&lt;/li&gt;
  &lt;li&gt;If timelines are long, we have plenty of time to figure out regulations.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If delaying regulations increases P(doom) by a lowish number like one percentage point, I don’t think it’s worth it—economically stifling regulations are not 1% as bad as extinction.&lt;/p&gt;

&lt;p&gt;I think it’s unlikely that transformative AI comes in the next five years. But it’s not unthinkable. Metaculus (&lt;a href=&quot;https://www.metaculus.com/questions/5121/date-of-artificial-general-intelligence/&quot;&gt;1&lt;/a&gt;, &lt;a href=&quot;https://www.metaculus.com/questions/19356/transformative-ai-date/&quot;&gt;2&lt;/a&gt;, &lt;a href=&quot;https://www.metaculus.com/questions/384/humanmachine-intelligence-parity-by-2040/&quot;&gt;3&lt;/a&gt;) and &lt;a href=&quot;https://wiki.aiimpacts.org/ai_timelines/predictions_of_human-level_ai_timelines/ai_timeline_surveys/2023_expert_survey_on_progress_in_ai&quot;&gt;surveyed experts&lt;/a&gt; don’t think it’s unthinkable either. And there could be a long delay between &lt;em&gt;pushing for&lt;/em&gt; regulations and those regulations &lt;em&gt;being implemented&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Dario Amodei, CEO of Anthropic, &lt;a href=&quot;https://x.com/ai_ctrl/status/1806748781440061870&quot;&gt;believes&lt;/a&gt; human-level AI could arrive within 1–2 years. That’s not enough time to figure out nuanced regulations. If he’s right, we need regulations &lt;em&gt;right now.&lt;/em&gt; (Actually, we needed regulations ten years ago, but the second-best time is now.)&lt;/p&gt;

&lt;p&gt;Given that forecasts put a reasonable probability on transformative AI arriving very soon, I don’t see how it makes sense to delay regulations any more than we already have.&lt;/p&gt;

&lt;p&gt;(I believe a lot of people get this wrong because they’re not thinking probabilistically. Someone has (say) a 10% P(doom) and a 10% chance of AGI within five years, and they round that off to “it’s not going to happen so we don’t need to worry yet.” A 10% chance is still really really bad.)&lt;/p&gt;
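&lt;p&gt;The arithmetic behind that complaint, using the illustrative numbers from the paragraph above:&lt;/p&gt;

```python
# Illustrative numbers from the text: rounding small probabilities to zero
# hides a very large expected loss. Treating the two events as independent
# is a simplifying assumption of mine.
p_agi_within_5y = 0.10   # chance of AGI within five years
p_doom_given_agi = 0.10  # chance it goes catastrophically wrong
p_doom_within_5y = p_agi_within_5y * p_doom_given_agi

print(round(p_doom_within_5y, 3))  # 0.01, i.e. a 1-in-100 chance of extinction
```

&lt;p&gt;A 1-in-100 chance of extinction within five years is enormous by the standards we apply to any other risk; treating it as “not going to happen” is exactly the rounding error described above.&lt;/p&gt;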

&lt;p&gt;And I’m not sure it’s feasible to write the sort of nuanced regulations that some people want. The ideal case is writing regulations that enable beneficial AI while preventing dangerous AI, but in the limiting case, that amounts to “legalize aligned AI and ban unaligned AI”. The less we know about AI alignment, the less likely the regulations are to do the right thing. And we know so little that I’m not sure there’s any real advantage to adding nuance to regulations.&lt;/p&gt;

&lt;p&gt;(&lt;a href=&quot;https://en.wikipedia.org/wiki/Safe_and_Secure_Innovation_for_Frontier_Artificial_Intelligence_Models_Act&quot;&gt;SB 1047&lt;/a&gt; serves as a baseline: writing regulation with that level of nuance takes approximately zero marginal time because it’s already been done. Pushing to delay regulation only makes sense if you think we need something significantly more nuanced than SB 1047.)&lt;/p&gt;

&lt;h2 id=&quot;working-with-vs-against-ai-companies&quot;&gt;Working with vs. against AI companies&lt;/h2&gt;

&lt;p&gt;I believe it’s better (on the margin) to work against AI companies (credence: 80%). I am not aware of any strong arguments or evidence for one side or the other, but I have a few bits of weak evidence.&lt;/p&gt;

&lt;p&gt;(For someone with a larger budget, it might be worthwhile to commission an investigation into the track record of working with vs. against companies on this sort of thing.)&lt;/p&gt;

&lt;p&gt;There’s a moderately strong argument in favor of cooperating with AI companies on policy:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;If AI safety advocates make enemies with AI companies, those companies will get into a political fight with safety advocates, and companies are more powerful so they will probably win.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;How much stock you put in that argument depends on how well you think industry-friendly regulations will reduce x-risk. It seems to me that they won’t. Good regulations will cause AI companies to make less money. If you advocate for regulations that companies like, then the regulations won’t be good.&lt;sup id=&quot;fnref:52&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:52&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;23&lt;/a&gt;&lt;/sup&gt; I don’t see a middle ground where regulations prevent AI from killing everyone but also don’t impede companies’ profits. (A superintelligent AI is always going to be more profitable than anything else, unless it kills everyone.)&lt;/p&gt;

&lt;p&gt;If we get into a political fight with AI companies, we might lose. But if we concede and let AI companies get the regulations they want, we &lt;em&gt;definitely&lt;/em&gt; lose.&lt;sup id=&quot;fnref:40&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:40&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;24&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;Alternatively, you could join an AI company or try to cooperatively influence it from the outside.&lt;/p&gt;

&lt;p&gt;The (anecdotal) track record of working with AI companies so far:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;People who worked for OpenAI were forced to sign non-disparagement agreements, preventing them from dissenting publicly.&lt;/li&gt;
  &lt;li&gt;OpenAI claimed it would dedicate 20% of its compute to alignment research. A year later, the heads of the alignment team complained that they never got the promised compute; OpenAI lied, claiming it had meant something different from what it obviously meant; much of the alignment team quit; then OpenAI gave up on pretending not to have lied and disbanded its alignment team.&lt;/li&gt;
  &lt;li&gt;Geoffrey Hinton &lt;a href=&quot;https://www.spectator.co.uk/article/we-may-be-history-geoffrey-hinton-on-the-dangers-of-ai/&quot;&gt;quit Google&lt;/a&gt; because, among other reasons, he thought he was too constrained by “self-censorship.”&lt;/li&gt;
  &lt;li&gt;Altman attempted to fire board member Helen Toner for criticizing the unsafeness of OpenAI’s models.&lt;/li&gt;
  &lt;li&gt;Open Philanthropy donated $30 million to OpenAI to buy a board seat. This board seat was probably instrumental in getting Altman fired for (essentially) disregarding safety, but the firing didn’t stick and then all the safety-conscious board members got fired.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In each of these cases, working within the company made things definitively worse. The last instance came close to making things better but ultimately made things worse—it made OpenAI $30 million richer without making it safer.&lt;/p&gt;

&lt;p&gt;I know of at least one potential counterexample: OpenAI’s RLHF was developed by AI safety people who joined OpenAI to promote safety. But it’s not clear that RLHF helps with x-risk.&lt;sup id=&quot;fnref:11&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:11&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;25&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;Maybe OpenAI makes it uniquely hard to change the culture from within, and you’d fare better with other companies. I don’t think that’s true because two of the other big AI players, Meta and Google, are larger and have more inertia and therefore should be harder to change. Only Anthropic seems easier to influence.&lt;/p&gt;

&lt;p&gt;(But the founding members of Anthropic are concerned with x-risk and they’re still racing to build superintelligent AI as fast as possible while &lt;a href=&quot;https://x.com/ch402/status/1666482929772666880&quot;&gt;admitting&lt;/a&gt; that they have no idea how to make it safe. Influence doesn’t seem helpful: AI safety memes are in the positions of greatest possible influence—namely, the brains of the founders—but still aren’t making Anthropic safe.)&lt;/p&gt;

&lt;p&gt;At the risk of over-updating on random authors who I know nothing about: In 1968, James C. Thomson wrote an article called &lt;a href=&quot;https://archive.ph/6xVX2&quot;&gt;How Could Vietnam Happen? An Autopsy&lt;/a&gt; (h/t &lt;a href=&quot;https://www.lesswrong.com/posts/Jf3ECowLsygYYhEC2/jacobjacob-s-shortform-feed#Jw75qmwiKYyGSbXFn&quot;&gt;Jonas V&lt;/a&gt;). He wrote that, essentially, dissenting insiders don’t protest because they want to accumulate more influence first. They delay expressing their dissent, always wanting to increase the security of their position, and never get to a point where they actually use their position to do good. Former OpenAI employee Daniel Kokotajlo &lt;a href=&quot;https://www.lesswrong.com/posts/Jf3ECowLsygYYhEC2/jacobjacob-s-shortform-feed?commentId=jCgCgJn4wxR4DeGqR&quot;&gt;says&lt;/a&gt; he observed this happening at OpenAI.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://phdcomics.com/comics.php?f=1436&quot;&gt;PhD Comics&lt;/a&gt; observes the same phenomenon in academia:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;https://phdcomics.com/comics/archive/phd072011s.gif&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Hat tip to &lt;a href=&quot;https://www.betonit.ai/p/tenure-is-a-total-scam&quot;&gt;Bryan Caplan&lt;/a&gt;, who adds commentary:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;The classic story is that tenure protects dissenters. […]&lt;/p&gt;
&lt;/blockquote&gt;

&lt;blockquote&gt;
  &lt;p&gt;The flaw with the argument is that academic dissenters remain ultra-rare. Far too rare to justify the enormous downsides of the tenure system. And from a bird’s-eye view, the full effect of tenure on dissent is mixed at best. Remember: To get tenure, a dissenter normally has to spend a decade and a half impersonating a normal academic. If you start the process as a non-conformist, the system almost always either weeds you out or wins you over. By the time you get tenure, a creepy chorus of “One of us! One of us!” is in order.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Finally, “with” vs. “against” isn’t necessarily mutually exclusive. There are already many AI safety advocates working inside AI companies. Having a more &lt;a href=&quot;https://en.wikipedia.org/wiki/Radical_flank_effect&quot;&gt;radical flank&lt;/a&gt; on the outside could be a useful complementary strategy. (I’m not confident about this last argument.)&lt;/p&gt;

&lt;h2 id=&quot;political-diplomacy-vs-advocacy&quot;&gt;Political diplomacy vs. advocacy&lt;/h2&gt;

&lt;p&gt;I could say some of the same things about politics that I said about working from inside AI companies: empirically, people get caught in the trap of accumulating social capital and then never actually spend that capital.&lt;/p&gt;

&lt;p&gt;Relatedly, some people think you should talk about mundane risks of AI and avoid discussing extinction to not look weird. I have a strong prior toward honesty—telling people what you care about and what your true motivations are, rather than misrepresenting your beliefs (or lying by omission) to make them sound more palatable. And I have a moderate prior against accumulating power—both the good guys and the bad guys want to accumulate power. Honesty is an &lt;a href=&quot;https://slatestarcodex.com/2017/03/24/guided-by-the-beauty-of-our-weapons/&quot;&gt;asymmetric weapon&lt;/a&gt; and power is symmetric.&lt;/p&gt;

&lt;h1 id=&quot;conflicts-that-arent-man-vs-man-but-nonetheless-require-an-answer&quot;&gt;Conflicts that aren’t “man vs. man” but nonetheless require an answer&lt;/h1&gt;

&lt;p&gt;There are some other areas of debate where funding one side doesn’t necessarily hurt the other side, but I have a finite amount of money and I need to decide which type of thing to fund.&lt;/p&gt;

&lt;h2 id=&quot;pause-vs-responsible-scaling-policy-rsp&quot;&gt;Pause vs. Responsible Scaling Policy (RSP)&lt;/h2&gt;

&lt;p&gt;Originally I wrote a long diatribe on why I don’t like RSPs and how badly-written AI companies’ RSPs are. But after spending some more time reading pro-RSP commentary, I realized my criticisms didn’t matter because RSP advocates don’t seem to like RSPs much either. The biggest advocates have said things like (paraphrased) “a full pause would be better, but it’s not feasible, so an RSP is a reasonable compromise.” If I understand correctly, they see an RSP as essentially a worse version of a pause but without the downsides. So the real disagreement is about how big the downsides of a pause are.&lt;/p&gt;

&lt;p&gt;As far as I can tell, the main cruxes are:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;It’s not time to pause yet.&lt;/li&gt;
  &lt;li&gt;A pause is bad because it would create a hardware overhang.&lt;sup id=&quot;fnref:14&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:14&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;26&lt;/a&gt;&lt;/sup&gt;&lt;/li&gt;
  &lt;li&gt;If the US government mandates a pause, China will keep developing AI.&lt;/li&gt;
  &lt;li&gt;A pause has negative EV because it delays the glorious transhumanist future.&lt;/li&gt;
  &lt;li&gt;It’s likely that we can trust companies to voluntarily implement good RSPs.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I’ve already talked about these and why I believe they’re all incorrect. If you agree with my earlier arguments, then an unconditional pause makes more sense than an RSP.&lt;/p&gt;

&lt;p&gt;I would be more optimistic about a government-enforced RSP than a voluntary RSP, but I believe that’s not what people typically mean when they talk about RSPs.&lt;/p&gt;

&lt;h2 id=&quot;policy-research-vs-policy-advocacy&quot;&gt;Policy research vs. policy advocacy&lt;/h2&gt;

&lt;p&gt;Is it better to advocate for regulation/AI policy, or to do policy-relevant research?&lt;/p&gt;

&lt;p&gt;I don’t really know about this one. I get the sense that policy advocacy is more important, but I don’t have much of an argument as to why.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;The difference between no regulation vs. mediocre regulation is bigger than mediocre vs. good regulation. (credence: 70%)&lt;/li&gt;
  &lt;li&gt;Policy advocacy is more neglected (although they’re both pretty neglected). (credence: 90%)&lt;/li&gt;
  &lt;li&gt;It doesn’t seem that hard to write legislation to slow AI development. How much more research do we really need?&lt;sup id=&quot;fnref:19&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:19&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;27&lt;/a&gt;&lt;/sup&gt; (credence: 50%)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I made some related arguments in &lt;a href=&quot;#slow-nuanced-regulation-vs-fast-coarse-regulation&quot;&gt;slow nuanced regulation vs. fast coarse regulation&lt;/a&gt;. If nuanced regulation isn’t worth it, then policy research likely isn’t worth it because there’s not much research to do. (Although you might still do research on things like which advocacy strategies are likely to be most effective.)&lt;/p&gt;

&lt;h2 id=&quot;advocacy-directed-at-policy-makers-vs-the-general-public&quot;&gt;Advocacy directed at policy-makers vs. the general public&lt;/h2&gt;

&lt;p&gt;In favor of focusing on policy-makers:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;You get a bigger impact per person convinced, because policy-makers are the ones who actually enact regulations.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In favor of focusing on the public:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;A higher proportion of people will be receptive to your message. (And in fact, people are already broadly concerned about AI, so it might be less about convincing and more about motivating.)&lt;/li&gt;
  &lt;li&gt;Policy-makers’ activities are largely downstream of the public—they want to do what their constituents want.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I don’t have much of an opinion about which is better—I think it depends on the specifics of the organization that’s doing the advocacy. And both sorely need more funding.&lt;/p&gt;

&lt;h1 id=&quot;organizations&quot;&gt;Organizations&lt;/h1&gt;

&lt;p&gt;I’m not qualified to evaluate AI policy organizations&lt;sup id=&quot;fnref:21&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:21&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;28&lt;/a&gt;&lt;/sup&gt; so I would like to delegate to an expert grantmaker. Unfortunately, none of the existing grantmakers work for me. Most focus on technical research.&lt;sup id=&quot;fnref:57&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:57&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;29&lt;/a&gt;&lt;/sup&gt; Only &lt;a href=&quot;https://www.founderspledge.com/recommendations/topic/artificial-intelligence&quot;&gt;Founders Pledge&lt;/a&gt; has up-to-date recommendations&lt;sup id=&quot;fnref:56&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:56&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;30&lt;/a&gt;&lt;/sup&gt; on AI policy, but I didn’t realize they existed until I had spent a lot of time looking into organizations on my own, and it turns out I have some significant disagreements with the Founders Pledge recs. (Three of the seven Founders Pledge recs are my three least favorite orgs among the ones I review below.)&lt;sup id=&quot;fnref:58&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:58&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;31&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;So I did it myself. I made a list of every org I could find that works on AI policy.&lt;sup id=&quot;fnref:29&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:29&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;32&lt;/a&gt;&lt;/sup&gt; Then I did shallow evaluations of each of them.&lt;/p&gt;

&lt;p&gt;Some preamble:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;As a rule of thumb, I don’t want to fund anything Open Philanthropy has funded. Not because it means they don’t have room for more funding, but because I believe (credence: 80%) that Open Philanthropy has bad judgment on AI policy (as explained in &lt;a href=&quot;https://www.lesswrong.com/posts/wn5jTrtKkhspshA4c/michaeldickens-s-shortform?commentId=zoBMvdMAwpjTEY4st&quot;&gt;this comment&lt;/a&gt; by Oliver Habryka and &lt;a href=&quot;https://www.lesswrong.com/posts/wn5jTrtKkhspshA4c/michaeldickens-s-shortform?commentId=xpNDD82qjFpyYnP3Q&quot;&gt;reply&lt;/a&gt; by Akash—I have similar beliefs, but they explain it better than I do). Open Philanthropy prefers to fund orgs that behave “respectably” and downplay x-risks, and does not want to fund any orgs that &lt;a href=&quot;#working-with-vs-against-ai-companies&quot;&gt;work against&lt;/a&gt; AI companies.&lt;sup id=&quot;fnref:49&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:49&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;33&lt;/a&gt;&lt;/sup&gt; I don’t want to fund any org that’s potentially making it more difficult to communicate to policy-makers about AI x-risk or helping AI companies accelerate capabilities.&lt;/li&gt;
  &lt;li&gt;In the interest of making my life easier, I stopped investigating an organization as soon as I found a reason not to donate to it, so some of these writeups are missing obvious information.&lt;sup id=&quot;fnref:27&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:27&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;34&lt;/a&gt;&lt;/sup&gt;&lt;/li&gt;
  &lt;li&gt;A lot of these orgs have similar names. I use an org’s full name wherever its abbreviation is potentially ambiguous.&lt;sup id=&quot;fnref:26&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:26&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;35&lt;/a&gt;&lt;/sup&gt;&lt;/li&gt;
  &lt;li&gt;There’s an unfortunate dynamic where I won’t donate to an org if I can’t figure out what it’s doing. But if an org spends a lot of time writing about its activities, that’s time it could be spending on “real” work instead. I have no solution to this.&lt;sup id=&quot;fnref:28&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:28&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;36&lt;/a&gt;&lt;/sup&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;important-disclaimers&quot;&gt;Important disclaimers&lt;/h2&gt;

&lt;ul&gt;
  &lt;li&gt;When describing orgs’ missions and activities, sometimes I quote or paraphrase from their materials without using quotation marks because the text gets messy otherwise. If I do quote without attribution, the source will be one of the links provided in that section.&lt;/li&gt;
  &lt;li&gt;I only spent 1–2 hours looking into each organization, so I could be substantially wrong in many cases.&lt;/li&gt;
  &lt;li&gt;It might have been good practice to share (parts of) this document with the reviewed organizations before publishing,&lt;sup id=&quot;fnref:61&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:61&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;37&lt;/a&gt;&lt;/sup&gt; but I didn’t do that, mainly because it would take a lot of additional work.&lt;sup id=&quot;fnref:43&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:43&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;38&lt;/a&gt;&lt;/sup&gt; The only exception is if I referenced a private comment made by an individual, I asked permission from that individual before publishing it.&lt;/li&gt;
  &lt;li&gt;Potential conflict of interest: I have friends at METR and Palisade.
    &lt;ul&gt;
      &lt;li&gt;However, I didn’t know I had a friend who worked at METR until after I had written the section on METR. I’m not good at keeping track of where my friends work.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;I’m acquainted with some of the people at CSET, Lightcone, PauseAI, and Sentinel. I might have friends or acquaintances at other orgs as well—like I mentioned, I’m not good at knowing where people work.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;ai-policy-institute&quot;&gt;AI Policy Institute&lt;/h2&gt;

&lt;p&gt;&lt;a href=&quot;https://theaipi.org&quot;&gt;AI Policy Institute&lt;/a&gt; (mostly) runs &lt;a href=&quot;https://theaipi.org/media/&quot;&gt;public opinion polls&lt;/a&gt; on AI risks, some of which are relevant to x-risk. The polls cover some important issues and provide useful information to motivate policy-makers. Some examples:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;2.5x more voters support SB 1047 than oppose it. (&lt;a href=&quot;https://theaipi.org/voters-support-sb1047-in-collaborative-poll/&quot;&gt;source&lt;/a&gt;)&lt;/li&gt;
  &lt;li&gt;56% of voters agreed it would be a good thing if AI progress were significantly slowed, while 27% disagreed. (&lt;a href=&quot;https://theaipi.org/poll-biden-ai-executive-order-10-30-8/&quot;&gt;source&lt;/a&gt;)&lt;/li&gt;
  &lt;li&gt;Voters’ top priority on AI regulation is preventing catastrophic outcomes. (&lt;a href=&quot;https://theaipi.org/poll-biden-ai-executive-order-10-30-2/&quot;&gt;source&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This sort of work seems good. I’m not sure how big an impact it has on the margin. My intuition is that polls are good, but additional polls have rapidly diminishing returns, so I wouldn’t consider AI Policy Institute a top donation candidate.&lt;/p&gt;

&lt;p&gt;I could not find good information about its room for more funding. It did not respond to my inquiry on its funding situation.&lt;/p&gt;

&lt;h2 id=&quot;ai-safety-and-governance-fund&quot;&gt;AI Safety and Governance Fund&lt;/h2&gt;

&lt;p&gt;&lt;a href=&quot;https://aisgf.us/&quot;&gt;AI Safety and Governance Fund&lt;/a&gt; (which, to my knowledge, is a one-man org run by Mikhail Samin) wants to test and spread messages to reduce AI x-risk—see &lt;a href=&quot;https://manifund.org/projects/testing-and-spreading-messages-to-reduce-ai-x-risk&quot;&gt;Manifund proposal&lt;/a&gt;. It plans to buy ads to test what sorts of messaging are most effective at communicating the arguments for why AI x-risk matters.&lt;/p&gt;

&lt;p&gt;I like this project because:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Pushing for x-risk-relevant regulation is the most promising sort of intervention right now. But we don’t have much data on what sorts of messaging are most effective. This project intends to give us that data.&lt;/li&gt;
  &lt;li&gt;Mikhail Samin, who runs the org, has a good track record of work on AI safety projects (from what I can see).&lt;/li&gt;
  &lt;li&gt;Mikhail has reasonable plans for what to do with this information once he gets it. (He shared his plans with me privately and asked me not to publish them.)&lt;/li&gt;
  &lt;li&gt;The project has room for more funding, but it shouldn’t take much money to accomplish its goal.&lt;/li&gt;
  &lt;li&gt;The project received a speculation grant from the Survival and Flourishing Fund (SFF) and is reasonably likely to get more funding, but (1) it might not; (2) even if it does, I think it’s useful to diversify the funding base; (3) I generally like SFF grants and I don’t mind funging&lt;sup id=&quot;fnref:73&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:73&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;39&lt;/a&gt;&lt;/sup&gt; SFF dollars.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;ai-standards-lab&quot;&gt;AI Standards Lab&lt;/h2&gt;

&lt;p&gt;&lt;a href=&quot;https://www.aistandardslab.org/&quot;&gt;AI Standards Lab&lt;/a&gt; aims to accelerate the writing of AI safety standards (by standards bodies like ISO and NIST) by writing standards that orgs can adapt.&lt;/p&gt;

&lt;p&gt;These standards are rarely directly relevant to x-risk. Improving standards on sub-existential risks may make it easier to regulate x-risks, but I would rather see an org work on x-risk more directly.&lt;/p&gt;

&lt;p&gt;AI Standards Lab does not appear to be seeking donations.&lt;/p&gt;

&lt;h2 id=&quot;campaign-for-ai-safety&quot;&gt;Campaign for AI Safety&lt;/h2&gt;

&lt;p&gt;&lt;a href=&quot;https://www.campaignforaisafety.org/&quot;&gt;Campaign for AI Safety&lt;/a&gt; used to do public marketing and outreach to promote concern for AI x-risk. In early 2024, it was rolled into &lt;a href=&quot;#existential-risk-observatory&quot;&gt;Existential Risk Observatory&lt;/a&gt;, and the former organizers of Campaign for AI Safety now volunteer for Existential Risk Observatory.&lt;/p&gt;

&lt;p&gt;Campaign for AI Safety still has a donations page, but as far as I can tell, there is no reason to donate to it rather than to Existential Risk Observatory.&lt;/p&gt;

&lt;h2 id=&quot;centre-for-enabling-ea-learning-and-research-ceealar&quot;&gt;Centre for Enabling EA Learning and Research (CEEALAR)&lt;/h2&gt;

&lt;p&gt;&lt;a href=&quot;https://www.ceealar.org/&quot;&gt;CEEALAR&lt;/a&gt; runs the EA Hotel. Recently, it has &lt;a href=&quot;https://forum.effectivealtruism.org/posts/cfn2MMEmpnGjTWpAw/ai-winter-season-at-ea-hotel&quot;&gt;focused&lt;/a&gt; on supporting people who work on AI safety, including technical research and policy.&lt;/p&gt;

&lt;p&gt;Something like the EA Hotel could end up accidentally accelerating AI capabilities, but I’m confident that won’t happen because Greg Colbourn, who runs the EA Hotel, is appropriately cautious about AI (he has advocated for a moratorium on AI development).&lt;/p&gt;

&lt;p&gt;You could make a case that CEEALAR has a large multiplicative impact by supporting AI safety people. That case seems hard to make &lt;em&gt;well&lt;/em&gt;, and in the absence of a strong case, CEEALAR isn’t one of my top candidates.&lt;/p&gt;

&lt;h2 id=&quot;center-for-ai-policy&quot;&gt;Center for AI Policy&lt;/h2&gt;

&lt;p&gt;&lt;a href=&quot;https://www.centeraipolicy.org/&quot;&gt;The Center for AI Policy&lt;/a&gt; is a 501(c)(4) nonprofit designed to influence US policy to reduce existential and catastrophic risks from advanced AI (&lt;a href=&quot;https://forum.effectivealtruism.org/posts/NKNoDtPAfHiMA8bJp/introducing-the-center-for-ai-policy-and-we-re-hiring&quot;&gt;source&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;They are serious about x-risk and well-aligned with my position:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;Our current focus is building “stop button for AI” capacity in the US government.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Unlike some other orgs, it’s not bogged down by playing politics. For example, it’s willing to &lt;a href=&quot;https://www.centeraipolicy.org/work/sam-altmans-dangerous-and-unquenchable-craving-for-power&quot;&gt;call out&lt;/a&gt; Sam Altman’s bad behavior; and it &lt;a href=&quot;https://forum.effectivealtruism.org/posts/NKNoDtPAfHiMA8bJp/introducing-the-center-for-ai-policy-and-we-re-hiring#How_does_CAIP_differ_from_other_AI_governance_organizations_&quot;&gt;focuses&lt;/a&gt; on conducting advocacy now, rather than amassing influence that can be used later (I’m generally averse to power-seeking).&lt;/p&gt;

&lt;p&gt;The org has &lt;a href=&quot;https://www.centeraipolicy.org/work/model-legislation-release-april-2024&quot;&gt;proposed&lt;/a&gt; model legislation that makes some non-trivial policy proposals (see &lt;a href=&quot;https://assets.caip.org/caip/RAAIA%20Executive%20Summary%20%28April%202024%29.pdf&quot;&gt;summary pdf&lt;/a&gt; and &lt;a href=&quot;https://assets.caip.org/caip/RAAIA%20%28April%202024%29.pdf&quot;&gt;full text pdf&lt;/a&gt;). The legislation would:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;require customers buying $30,000 advanced AI chips to fill out a one-page registration form;&lt;/li&gt;
  &lt;li&gt;issue permits to the most advanced AI systems based on the quality of their safety testing;&lt;/li&gt;
  &lt;li&gt;define a reasonable set of emergency powers for the government so that they can intervene and shut down an AI system that’s in the process of going rogue.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is a breath of fresh air compared to most of the policy proposals I’ve read (none of which I’ve discussed yet, because I’m writing this list in alphabetical order). Most proposals say things like:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;make the regulation be good instead of bad;&lt;/li&gt;
  &lt;li&gt;simultaneously promote innovation and safety (there is no such thing as a tradeoff);&lt;/li&gt;
  &lt;li&gt;sternly tell AI companies that they need to not be unsafe, or else we will be very upset.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I’m paraphrasing for humor&lt;sup id=&quot;fnref:22&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:22&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;40&lt;/a&gt;&lt;/sup&gt;, but I don’t think I’m exaggerating—I’ve read proposals from AI policy orgs that were equivalent to these, but phrased more opaquely. (Like nobody explicitly said “we refuse to acknowledge the existence of tradeoffs”, but they did, in fact, refuse to acknowledge the existence of tradeoffs.)&lt;/p&gt;

&lt;p&gt;Center for AI Policy has a target budget of $1.6 million for 2025 (&lt;a href=&quot;https://docs.google.com/document/d/1RzXSYYeUIdAy7I7gJu3DhTxeQqvj8rwteyJUjkFKYY4/&quot;&gt;source&lt;/a&gt;), and its current funding falls considerably short of this goal, so it can make good use of additional money.&lt;/p&gt;

&lt;h2 id=&quot;center-for-ai-safety&quot;&gt;Center for AI Safety&lt;/h2&gt;

&lt;p&gt;&lt;a href=&quot;https://www.safe.ai/&quot;&gt;Center for AI Safety&lt;/a&gt; does safety research and advocates for safety standards. It has a good track record so far:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;It drafted the original version of SB 1047.&lt;/li&gt;
  &lt;li&gt;Its &lt;a href=&quot;https://www.safe.ai/work/statement-on-ai-risk&quot;&gt;Statement on AI Risk&lt;/a&gt; got signatures from major figures in AI and helped bring AI x-risk into the &lt;a href=&quot;https://en.wikipedia.org/wiki/Overton_window&quot;&gt;Overton window&lt;/a&gt;.&lt;/li&gt;
  &lt;li&gt;It’s done some &lt;a href=&quot;https://www.safe.ai/work/2023-impact-report&quot;&gt;other work&lt;/a&gt; (e.g., writing an AI safety textbook; buying compute for safety researchers) that I like but don’t think is as impactful. The given examples are about supporting alignment research, and as I’ve &lt;a href=&quot;#ai-safety-technical-research-vs-policy&quot;&gt;said&lt;/a&gt;, I’m not as bullish on alignment research.&lt;/li&gt;
  &lt;li&gt;The Center for AI Safety Action Fund does lobbying, which might be good, but I can’t find much public information about what it lobbies for. It did support SB 1047, which is good.&lt;/li&gt;
  &lt;li&gt;It’s led by Dan Hendrycks. I’ve read some of his writings and I get the general sense that he’s competent.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Center for AI Safety has some work that I’m very optimistic about (most notably the Statement on AI Risk), but I’m only weakly to moderately optimistic about most of its activities.&lt;/p&gt;

&lt;p&gt;It has received $9 million from Open Philanthropy (&lt;a href=&quot;https://www.openphilanthropy.org/grants/center-for-ai-safety-general-support/&quot;&gt;1&lt;/a&gt;, &lt;a href=&quot;https://www.openphilanthropy.org/grants/center-for-ai-safety-general-support-2023/&quot;&gt;2&lt;/a&gt;) and just under $1 million from the &lt;a href=&quot;https://survivalandflourishing.fund/sff-2023-h2-recommendations&quot;&gt;Survival and Flourishing Fund&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I have a good impression of the Center for AI Safety, but it’s not one of my top candidates because (1) it’s already well-funded and (2) it has done some things I really like, but those are diluted by a lot of things I only moderately like.&lt;/p&gt;

&lt;h2 id=&quot;center-for-human-compatible-ai&quot;&gt;Center for Human-Compatible AI&lt;/h2&gt;

&lt;p&gt;&lt;a href=&quot;https://humancompatible.ai/&quot;&gt;Center for Human-Compatible AI&lt;/a&gt; does mostly technical research, and some advocacy. To my knowledge, the advocacy essentially consists of Stuart Russell using his influential position to &lt;a href=&quot;https://humancompatible.ai/blog/2023/09/11/stuart-russell-testifies-on-ai-regulation-at-u-s-senate-hearing/#artificial-intelligence:-origins-and-concepts&quot;&gt;advocate&lt;/a&gt; for regulation. While that’s good, I don’t think Stuart Russell is personally funding-constrained, so I don’t think marginal donations to the org will help advocacy efforts.&lt;/p&gt;

&lt;h2 id=&quot;center-for-long-term-resilience&quot;&gt;Center for Long-Term Resilience&lt;/h2&gt;

&lt;p&gt;&lt;a href=&quot;https://www.longtermresilience.org/&quot;&gt;The Center for Long-Term Resilience&lt;/a&gt; is a think tank focused on reducing “extreme risks”, which includes x-risks but also other things. It talks to policy-makers and writes reports. I’ll focus on its reports because those are easier to assess.&lt;/p&gt;

&lt;p&gt;About half of the org’s &lt;a href=&quot;https://www.longtermresilience.org/reports/&quot;&gt;work&lt;/a&gt; relates to AI risk. Some of the AI publications are relevant to x-risk (&lt;a href=&quot;https://www.longtermresilience.org/reports/transforming-risk-governance-at-frontier-ai-companies/&quot;&gt;1&lt;/a&gt;); most are marginally relevant (&lt;a href=&quot;https://www.longtermresilience.org/reports/ai-incident-reporting-addressing-a-gap-in-the-uks-regulation-of-ai/&quot;&gt;2&lt;/a&gt;, &lt;a href=&quot;https://www.longtermresilience.org/reports/why-we-recommend-risk-assessments-over-evaluations-for-ai-enabled-biological-tools-bts/&quot;&gt;3&lt;/a&gt;) or not relevant (&lt;a href=&quot;https://www.longtermresilience.org/reports/the-near-term-impact-of-ai-on-disinformation/&quot;&gt;4&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;I skimmed a few of its reports. Here I will give commentary on two of its reports, starting with the one I liked better.&lt;/p&gt;

&lt;p&gt;I’m reluctant to criticize orgs that I think have good intentions, but I think it’s more important to accurately convey my true beliefs. And my true belief is that these reports are not good (credence: 75%).&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://www.longtermresilience.org/reports/transforming-risk-governance-at-frontier-ai-companies/&quot;&gt;Transforming risk governance at frontier AI companies&lt;/a&gt; was my favorite report that I saw from the Center for Long-Term Resilience.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;This was the only one of the org’s recent reports that looked meaningfully relevant to x-risk.&lt;/li&gt;
  &lt;li&gt;The report correctly identifies some inadequacies with AI companies’ risk processes. It proposes some high-level changes that I expect would have a positive impact.&lt;/li&gt;
  &lt;li&gt;That said, I don’t think the changes would have a &lt;em&gt;big&lt;/em&gt; impact. The proposal would make more sense for dealing with typical risks that most industries see, but it’s not (remotely) sufficient to prepare for extinction risks. Indeed, the report proposes using “best practice” risk management. Best practice means standard, which means insufficient for x-risk. (And best practice means well-established, which means well-known, which means the marginal value of proposing it is small.)&lt;/li&gt;
  &lt;li&gt;The report implies that we should rely on voluntary compliance from AI companies. It proposes that companies should use external auditors, but not that those auditors should have any real power.&lt;/li&gt;
  &lt;li&gt;An illustrative quote from the Risk Oversight section: “Although they should not make the final decisions, the specialist risk and assurance [advisors] should play a ‘challenger’ role, pressure testing the business’s plans and decisions to ensure they are risk-informed.” I disagree. Risk advisors should have veto power. The CEO should not have unilateral authority to deploy dangerous models.&lt;/li&gt;
  &lt;li&gt;The report has little in the way of concrete recommendations. Most of the recommendations are non-actionable—for example, “build consensus within business and civil society about the importance of more holistic risk management”. Ok, how specifically does one do that?&lt;/li&gt;
  &lt;li&gt;Contrast this with the model legislation from the Center for AI Policy, where the &lt;a href=&quot;https://assets.caip.org/caip/RAAIA%20Executive%20Summary%20%28April%202024%29.pdf&quot;&gt;one-page executive summary&lt;/a&gt; made proposals that were easier to understand, more concrete, and more relevant to x-risk.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Another example of a report, which I liked less: &lt;a href=&quot;https://www.longtermresilience.org/reports/response-to-establishing-a-pro-innovation-approach-to-regulating-ai/&quot;&gt;Response to ‘Establishing a pro-innovation approach to regulating AI’&lt;/a&gt; (a reply to a &lt;a href=&quot;https://www.gov.uk/government/publications/establishing-a-pro-innovation-approach-to-regulating-ai&quot;&gt;request for proposals&lt;/a&gt; by the UK Office of AI).&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;The report makes four high-level proposals, all of which I dislike:
    &lt;ol&gt;
      &lt;li&gt;“Promoting coherence and reducing inefficiencies across the regulatory regime” – Nobody needs to be told to reduce inefficiency. The only reason any process is inefficient is that people don’t know how to make it more efficient. &lt;em&gt;How exactly&lt;/em&gt; am I supposed to reduce inefficiency? (This quote comes from the executive summary, where I can forgive some degree of vagueness, but the full report does not provide concrete details.)&lt;/li&gt;
      &lt;li&gt;“Ensuring existing regulators have sufficient expertise and capacity” – Again, this is an &lt;a href=&quot;https://www.lesswrong.com/posts/dLbkrPu5STNCBLRjr/applause-lights&quot;&gt;applause light&lt;/a&gt;, not a real suggestion. No one thinks regulators should have insufficient expertise or capacity.&lt;/li&gt;
      &lt;li&gt;“Ensuring that regulatory gaps can be identified and addressed” – More of the same.&lt;/li&gt;
      &lt;li&gt;“Being sufficiently adaptive to advances in AI capabilities” – More of the same.&lt;/li&gt;
    &lt;/ol&gt;
  &lt;/li&gt;
  &lt;li&gt;The report suggests regulating all AI with a single body rather than diffusely. I like this idea—if a regulatory body is going to prevent x-risk, it probably needs to have broad authority. (Except the report also says “we do not necessarily think [the regulator] needs to be a single body”, which seems to contradict its earlier recommendation.)&lt;/li&gt;
  &lt;li&gt;The report says “It will become increasingly important to distribute responsibility across the entire supply chain of AI development”. I think that’s a good idea if it means restricting sales and exports of compute hardware. But it doesn’t say that explicitly (in fact it provides no further detail at all), and I don’t think policy-makers will interpret it that way.&lt;/li&gt;
  &lt;li&gt;“Recognise that some form of regulation may be needed for general-purpose systems such as foundation models in future.” I would have written this as: “Recognize that strict regulation for general-purpose systems is urgently needed.” Stop downplaying the severity of the situation.&lt;/li&gt;
  &lt;li&gt;If I were writing this report, I would have included evidence/reasoning on why AI risk (x-risk and catastrophic risk) is a major concern, and what this implies about how to regulate it. The report doesn’t include any arguments that could change readers’ minds.&lt;/li&gt;
  &lt;li&gt;In conclusion, this report is mostly vacuous. It contains some non-vacuous proposals in the full text (not represented in the executive summary), but the non-vacuous proposals aren’t particularly concrete and aren’t particularly useful for reducing x-risk.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;An alternative interpretation is that the Center for Long-Term Resilience wants to build influence by writing long and serious-looking reports that nobody could reasonably disagree with. As I touched on &lt;a href=&quot;#political-diplomacy-vs-advocacy&quot;&gt;previously&lt;/a&gt;, I’m not optimistic about this strategy. I disapprove of deceptive tactics, and I think it’s a bad idea even on naive consequentialist grounds (i.e., it’s not going to work as well as writing actionable reports would). And—perhaps more importantly—if the org’s reports are low quality, then I can’t trust that it does a good job when working with policy-makers.&lt;/p&gt;

&lt;h2 id=&quot;center-for-security-and-emerging-technology-cset&quot;&gt;Center for Security and Emerging Technology (CSET)&lt;/h2&gt;

&lt;p&gt;&lt;a href=&quot;https://cset.georgetown.edu/&quot;&gt;Center for Security and Emerging Technology&lt;/a&gt; does work on AI policy along with various other topics. It has received &lt;a href=&quot;https://www.openphilanthropy.org/?s=center+for+security+and+emerging+technology&quot;&gt;$105 million&lt;/a&gt; from Open Philanthropy.&lt;/p&gt;

&lt;p&gt;I wouldn’t donate to CSET because it has so much funding already, but I took a brief look at its publications.&lt;/p&gt;

&lt;p&gt;The research appears mostly tangential or unrelated to x-risk, instead covering subjects like &lt;a href=&quot;https://cset.georgetown.edu/publication/securing-critical-infrastructure-in-the-age-of-ai/&quot;&gt;cybersecurity&lt;/a&gt;, &lt;a href=&quot;https://cset.georgetown.edu/publication/controlling-large-language-models-a-primer/&quot;&gt;deceptive/undesirable LLM output&lt;/a&gt;, and &lt;a href=&quot;https://cset.georgetown.edu/publication/building-the-tech-coalition/&quot;&gt;how the US Department of Defense can use AI to bolster its military power&lt;/a&gt;—this last report seems harmful on balance. Some of its reports (such as &lt;a href=&quot;https://cset.georgetown.edu/publication/enabling-principles-for-ai-governance/&quot;&gt;Enabling Principles for AI Governance&lt;/a&gt;) have the &lt;a href=&quot;#center-for-long-term-resilience&quot;&gt;previously-discussed problem&lt;/a&gt; of being mostly vacuous/non-actionable.&lt;/p&gt;

&lt;p&gt;CSET also works to put researchers into positions where they can directly influence policy (&lt;a href=&quot;https://www.founderspledge.com/research/center-for-security-and-emerging-technology&quot;&gt;source&lt;/a&gt;).&lt;sup id=&quot;fnref:67&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:67&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;41&lt;/a&gt;&lt;/sup&gt; Allegedly, CSET has considerable political influence, but I haven’t identified any visible benefits from that influence (contrast with the &lt;a href=&quot;#center-for-ai-safety&quot;&gt;Center for AI Safety&lt;/a&gt;, which wrote SB 1047). The most legible result I can find is that CSET has collaborated with the Department of Defense; without knowing the details, my prior is that collaborating with DOD is net negative. I would prefer the DOD to be less effective, not more. (Maybe CSET is convincing the DOD not to build military AI but I doubt it; CSET’s reports suggest the opposite.)&lt;/p&gt;

&lt;p&gt;CSET has the same issue as the Center for Long-Term Resilience: if your public outputs are low-quality (or even net harmful), then why should I expect your behind-the-scenes work to be any better?&lt;/p&gt;

&lt;h2 id=&quot;centre-for-long-term-policy&quot;&gt;Centre for Long-Term Policy&lt;/h2&gt;

&lt;p&gt;&lt;a href=&quot;https://www.langsikt.no/en/langsikt&quot;&gt;Centre for Long-Term Policy&lt;/a&gt; operates in Norway and focuses on influencing Norwegian policy on x-risk, longtermism, and global health.&lt;/p&gt;

&lt;p&gt;I didn’t look into it much because I think Norwegian AI policy is unlikely to matter—superintelligent AI will almost certainly not be developed in Norway, so Norwegian regulation has limited ability to constrain AI development.&lt;/p&gt;

&lt;p&gt;From skimming its publications, they mostly cover subjects other than AI x-risk policy.&lt;/p&gt;

&lt;p&gt;The Centre for Long-Term Policy received an undisclosed amount of &lt;a href=&quot;https://www.langsikt.no/en/finansiering&quot;&gt;funding&lt;/a&gt; from Open Philanthropy in 2024.&lt;/p&gt;

&lt;h2 id=&quot;centre-for-the-governance-of-ai&quot;&gt;Centre for the Governance of AI&lt;/h2&gt;

&lt;p&gt;&lt;a href=&quot;https://www.governance.ai/&quot;&gt;Centre for the Governance of AI&lt;/a&gt; does alignment research and policy research. It appears to focus primarily on the former, which, as I’ve &lt;a href=&quot;#ai-safety-technical-research-vs-policy&quot;&gt;discussed&lt;/a&gt;, I’m not as optimistic about. (And I don’t &lt;a href=&quot;#policy-research-vs-policy-advocacy&quot;&gt;like&lt;/a&gt; policy research as much as policy advocacy.)&lt;/p&gt;

&lt;p&gt;Its policy research seems mostly unrelated to x-risk, for example it has multiple reports on AI-driven unemployment (&lt;a href=&quot;https://www.governance.ai/research-paper/scenarios-for-the-transition-to-agi&quot;&gt;1&lt;/a&gt;, &lt;a href=&quot;https://www.governance.ai/research-paper/preparing-the-workforce-for-an-uncertain-ai-future&quot;&gt;2&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;My favorite of its published reports is &lt;a href=&quot;https://www.governance.ai/research-paper/lessons-atomic-bomb-ord&quot;&gt;Lessons from the Development of the Atomic Bomb&lt;/a&gt;. It’s written by Toby Ord, who doesn’t work there.&lt;/p&gt;

&lt;p&gt;Centre for the Governance of AI has received &lt;a href=&quot;https://www.openphilanthropy.org/?s=Centre+for+the+Governance+of+AI&quot;&gt;$6 million&lt;/a&gt; from Open Philanthropy.&lt;/p&gt;

&lt;p&gt;The org appears reasonably well-funded. I don’t have major complaints about its work, but (1) the work does not look particularly strong and (2) it doesn’t cover the focus areas that I’m most optimistic about.&lt;/p&gt;

&lt;h2 id=&quot;civai&quot;&gt;CivAI&lt;/h2&gt;

&lt;p&gt;&lt;a href=&quot;https://civai.org/&quot;&gt;CivAI&lt;/a&gt; raises awareness about AI dangers by building interactive software to demonstrate AI capabilities, for example &lt;a href=&quot;https://civai.org/cyber-demos&quot;&gt;AI-powered cybersecurity threats&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This org is new which makes it difficult to evaluate. It appears to have the same theory of change as Palisade Research (which I review &lt;a href=&quot;#palisade-research&quot;&gt;below&lt;/a&gt;), but I like Palisade better, for three reasons:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;None of CivAI’s work so far appears relevant to x-risk. For example, its most recent demo focuses on generating fake images for deceptive purposes.&lt;/li&gt;
  &lt;li&gt;I think Palisade’s methods for demonstrating capabilities are more likely to get attention (credence: 65%).&lt;/li&gt;
  &lt;li&gt;I’m more confident in Palisade’s ability to communicate with policy-makers.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;CivAI does not appear to be seeking donations. There is no option to donate through the website.&lt;/p&gt;

&lt;h2 id=&quot;control-ai&quot;&gt;Control AI&lt;/h2&gt;

&lt;p&gt;&lt;a href=&quot;https://controlai.com/&quot;&gt;Control AI&lt;/a&gt; runs advocacy campaigns on AI risk.&lt;/p&gt;

&lt;p&gt;Its &lt;a href=&quot;https://www.narrowpath.co/&quot;&gt;current campaign&lt;/a&gt; proposes slowing AI development such that no one develops superintelligence for at least the next 20 years, then using this time to establish a robust system for AI oversight. The campaign includes a &lt;a href=&quot;https://www.narrowpath.co/annexes&quot;&gt;non-vacuous proposal&lt;/a&gt; for the organizational structure of a regulatory body.&lt;/p&gt;

&lt;p&gt;Control AI has a &lt;a href=&quot;https://arxiv.org/abs/2310.20563&quot;&gt;paper&lt;/a&gt; on AI policy that appears reasonable:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;It acknowledges that voluntary commitments from AI companies are insufficient.&lt;/li&gt;
  &lt;li&gt;It proposes establishing an international regulatory body that (1) imposes a global cap on computing power used to train an AI system and (2) mandates safety evaluations.&lt;/li&gt;
  &lt;li&gt;It proposes that regulators should have the authority to halt deployment of a model that they deem excessively dangerous.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The &lt;a href=&quot;https://www.narrowpath.co/&quot;&gt;campaign’s proposal&lt;/a&gt; is similar. It lays out the most concrete plan I’ve seen for how to get to a place where we can solve AI alignment.&lt;/p&gt;

&lt;p&gt;I listened to a &lt;a href=&quot;https://futureoflife.org/podcast/andrea-miotti-on-a-narrow-path-to-safe-transformative-ai/&quot;&gt;podcast&lt;/a&gt; with Andrea Miotti, co-founder of Control AI. He mostly covered standard arguments for caring about AI x-risk, but he also made some insightful comments that changed my thinking a bit.&lt;sup id=&quot;fnref:46&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:46&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;42&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;I like the concept of Control AI’s latest campaign, but I don’t know how much impact it will have.&lt;sup id=&quot;fnref:45&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:45&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;43&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;Control AI’s past campaigns (&lt;a href=&quot;https://controlai.com/campaign-against-foundation-models&quot;&gt;example&lt;/a&gt;) have received media coverage (&lt;a href=&quot;https://www.euronews.com/next/2023/11/29/europes-ai-act-under-threat-by-lobbyists-experts-and-the-public-say&quot;&gt;example&lt;/a&gt;) and their policy objectives have been achieved, although it’s not clear how much of a causal role Control AI played in achieving those objectives, or what Control AI actually did. Control AI clearly deserves &lt;em&gt;some&lt;/em&gt; credit, or else news outlets wouldn’t cite it.&lt;/p&gt;

&lt;p&gt;Control AI might be as impactful as other advocacy orgs that I like, but I have more uncertainty about it, so it’s not a top candidate. It would be fairly easy to change my mind about this.&lt;/p&gt;

&lt;p&gt;I couldn’t find any information about Control AI’s funding situation, and I didn’t inquire because it wasn’t one of my top candidates.&lt;/p&gt;

&lt;h2 id=&quot;existential-risk-observatory&quot;&gt;Existential Risk Observatory&lt;/h2&gt;

&lt;p&gt;&lt;a href=&quot;https://www.existentialriskobservatory.org/&quot;&gt;Existential Risk Observatory&lt;/a&gt; writes &lt;a href=&quot;https://www.existentialriskobservatory.org/#in-the-media&quot;&gt;media articles&lt;/a&gt; on AI x-risk, does &lt;a href=&quot;https://www.existentialriskobservatory.org/research-2/&quot;&gt;policy research&lt;/a&gt;, and publishes &lt;a href=&quot;https://www.existentialriskobservatory.org/policy-proposals/&quot;&gt;policy proposals&lt;/a&gt; (see &lt;a href=&quot;https://existentialriskobservatory.org/papers_and_reports/Policy%20Proposals.pdf&quot;&gt;pdf&lt;/a&gt; with a summary of proposals).&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;It appears to be having some success bringing public attention to x-risk via mainstream media, including advocating for a pause in &lt;a href=&quot;https://time.com/6295879/ai-pause-is-humanitys-best-bet-for-preventing-extinction/&quot;&gt;TIME&lt;/a&gt; (jointly with Joep Meindertsma of PauseAI).&lt;/li&gt;
  &lt;li&gt;Its policy proposals are serious: it proposes implementing an AI pause, tracking frontier AI hardware, and explicitly recognizing extinction risk in regulations.&lt;/li&gt;
  &lt;li&gt;The research mainly focuses on public opinion, for example opinions on AI capabilities/danger (&lt;a href=&quot;https://existentialriskobservatory.org/papers_and_reports/research/AI%20doom%20prevention%20message%20testing%20in%20USA.%20Volume%201.pdf&quot;&gt;pdf&lt;/a&gt;) and message testing on an AI moratorium (&lt;a href=&quot;https://existentialriskobservatory.org/papers_and_reports/research/Test%20of%20narratives%20for%20AGI%20moratorium%20support.pdf&quot;&gt;pdf&lt;/a&gt;).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Existential Risk Observatory is small and funding-constrained, so I expect that donations would be impactful.&lt;/p&gt;

&lt;p&gt;My primary concern is that it operates in the Netherlands. Dutch policy is unlikely to have much influence on x-risk—the United States is the most important country by far, followed by China. And a Dutch organization likely has little influence on United States policy. Existential Risk Observatory can still influence public opinion in America (for example via its TIME article), but I expect a US-headquartered org to have a greater impact.&lt;/p&gt;

&lt;h2 id=&quot;future-of-life-institute-fli&quot;&gt;Future of Life Institute (FLI)&lt;/h2&gt;

&lt;p&gt;&lt;a href=&quot;https://futureoflife.org/&quot;&gt;FLI&lt;/a&gt; has done some good advocacy work like the &lt;a href=&quot;https://futureoflife.org/open-letter/pause-giant-ai-experiments/&quot;&gt;6-month pause letter&lt;/a&gt; (which &lt;a href=&quot;https://manifold.markets/CalebW/in-2030-will-we-think-flis-6-month&quot;&gt;probably reduced x-risk&lt;/a&gt;). It also has a $400 million endowment, so I don’t think it needs any donations from me.&lt;/p&gt;

&lt;h2 id=&quot;future-society&quot;&gt;Future Society&lt;/h2&gt;

&lt;p&gt;&lt;a href=&quot;https://thefuturesociety.org/&quot;&gt;The Future Society&lt;/a&gt; seeks to align AI through better governance. I reviewed some of &lt;a href=&quot;https://thefuturesociety.org/our-work/&quot;&gt;its work&lt;/a&gt;, and it looks almost entirely irrelevant to x-risk.&lt;/p&gt;

&lt;p&gt;Of The Future Society’s recent publications, the most concrete is “List of Potential Clauses to Govern the Development of General Purpose AI Systems” (&lt;a href=&quot;https://thefuturesociety.org/wp-content/uploads/2023/08/List-of-Potential-Clauses_Aug-2023-v.-0.1.pdf&quot;&gt;pdf&lt;/a&gt;). Some notes on this report:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;The Future Society collected recommendations from industry staff, independent experts, and engineers from frontier labs. Engineers from frontier labs should not be trusted to produce recommendations, any more than &lt;a href=&quot;https://x.com/LinchZhang/status/1842344867638411764&quot;&gt;petroleum engineers&lt;/a&gt; should be trusted to set climate change policy.&lt;/li&gt;
  &lt;li&gt;The proposals for mitigating harmful behavior are mostly vacuous and in some cases harmful. They largely amount to: keep building dangerous AI, but do a good job of making it safe.&lt;/li&gt;
  &lt;li&gt;“Use the most state-of-the-art editing techniques to erase capabilities and knowledge that are mostly useful for misuse.” That’s not going to work. (Palisade Research has &lt;a href=&quot;https://arxiv.org/abs/2310.20624&quot;&gt;demonstrated&lt;/a&gt; that it’s easy to remove safeguards from LLMs.)&lt;/li&gt;
  &lt;li&gt;“Use state-of-the-art methods and tools for ensuring safety and trustworthiness of models, such as mechanistic interpretability.” This sentence makes me think the authors don’t have a good understanding of AI safety. The state of the art in mechanistic interpretability is nowhere close to being able to ensure the trustworthiness of models. We still have virtually no idea what’s going on inside large neural networks.&lt;/li&gt;
  &lt;li&gt;The report proposes using the same industry-standard risk management model that the Center for Long-Term Resilience &lt;a href=&quot;#center-for-long-term-resilience&quot;&gt;proposed&lt;/a&gt;. The same criticisms apply—this model is obvious enough that you don’t need to propose it, and severely insufficient for mitigating extinction risks.&lt;/li&gt;
  &lt;li&gt;The report proposes “air gapping &amp;amp; sandboxing, no internet access” for powerful models. I feel like I shouldn’t need to explain why that &lt;a href=&quot;https://www.yudkowsky.net/singularity/aibox&quot;&gt;won’t work&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Another report (&lt;a href=&quot;https://thefuturesociety.org/wp-content/uploads/2023/09/Executive-Summary-TFS-Sep-2023-Heavy-is-the-Head-that-Wears-the-Crown-risk-based-tiered-approach-to-governing-GPAI.pdf&quot;&gt;pdf&lt;/a&gt;) submitted in response to the EU AI Act discussed seven challenges of “general-purpose AI”. The second challenge is “generalization and capability risks, i.e. capability risks, societal risks and extinction risks”. There is no further discussion of extinction risk, and this is the only place that the word “extinction” appears in any of The Future Society’s materials. (The word “existential” appears a few times, but existential risks are not discussed.)&lt;/p&gt;

&lt;h2 id=&quot;horizon-institute-for-public-service&quot;&gt;Horizon Institute for Public Service&lt;/h2&gt;

&lt;p&gt;&lt;a href=&quot;https://horizonpublicservice.org/&quot;&gt;Horizon Institute for Public Service&lt;/a&gt; runs a fellowship where it places people into positions in governments and think tanks. It claims to be reasonably &lt;a href=&quot;https://horizonpublicservice.org/fellow-accomplishments/&quot;&gt;successful&lt;/a&gt;. (I do not have much of an opinion as to how much credit Horizon Institute deserves for its fellows’ accomplishments.)&lt;/p&gt;

&lt;p&gt;Horizon Institute has received an undisclosed amount of &lt;a href=&quot;https://horizonpublicservice.org/about-us/&quot;&gt;funding&lt;/a&gt; from Open Philanthropy (along with some other big foundations).&lt;/p&gt;

&lt;p&gt;Do Horizon fellows care about x-risk, and does their work reduce x-risk in expectation? &lt;a href=&quot;https://www.politico.com/news/2023/10/13/open-philanthropy-funding-ai-policy-00121362&quot;&gt;Politico alleges&lt;/a&gt; that the Horizon Institute is a clandestine plot to get governments to care more about x-risk. I’m not a fan of clandestine plots, but that aside, should I expect Horizon fellows to reduce x-risk?&lt;/p&gt;

&lt;p&gt;Most of their work is not legible, so I’m skeptical by default. Caring about x-risk is not enough to make me trust you. Some people take totally the wrong lessons from concerns about x-risk (especially AI risk) and end up increasing it instead. Case in point: OpenAI, DeepMind, and Anthropic &lt;a href=&quot;https://www.astralcodexten.com/p/why-not-slow-ai-progress&quot;&gt;all&lt;/a&gt; had founders who cared about AI x-risk, and two of those (OpenAI + Anthropic) were founded with the explicit mission of preventing extinction. And yet OpenAI is probably the #1 worst thing that has ever happened in terms of &lt;em&gt;increasing&lt;/em&gt; x-risk, and DeepMind and Anthropic aren’t much better.&lt;/p&gt;

&lt;p&gt;I reviewed all the highlighted &lt;a href=&quot;https://horizonpublicservice.org/fellow-accomplishments/&quot;&gt;accomplishments&lt;/a&gt; of fellows that looked relevant to AI:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;&lt;a href=&quot;https://cltc.berkeley.edu/wp-content/uploads/2023/11/Berkeley-GPAIS-Foundation-Model-Risk-Management-Standards-Profile-v1.0.pdf&quot;&gt;AI risk management standards&lt;/a&gt; for NIST. Only marginally relevant to x-risk, but not bad.&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://carnegieendowment.org/posts/2023/09/how-hype-over-ai-superintelligence-could-lead-policy-astray?lang=en&quot;&gt;An article&lt;/a&gt; on how we shouldn’t worry about x-risk (!!).&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://fas.org/publication/creating-auditing-tools-for-ai-equity/&quot;&gt;Auditing tools for AI equity&lt;/a&gt;. Unrelated to x-risk.&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://www.brookings.edu/articles/detecting-ai-fingerprints-a-guide-to-watermarking-and-beyond/&quot;&gt;Detecting AI fingerprints&lt;/a&gt;. Marginally related to x-risk.&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://cset.georgetown.edu/publication/autonomous-cyber-defense/&quot;&gt;Autonomous cyber defense&lt;/a&gt;. Increasing the capabilities of cybersecurity AI is plausibly net negative.&lt;sup id=&quot;fnref:31&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:31&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;44&lt;/a&gt;&lt;/sup&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://www.lawfaremedia.org/article/eus-ai-act-barreling-toward-ai-standards-do-not-exist&quot;&gt;An article on the EU AI Act&lt;/a&gt;. Non-vacuous and discusses AI risk (not exactly x-risk, but close). Vaguely hints at slowing AI development.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In my judgment after taking a brief look, 3/6 highlighted writings were perhaps marginally useful for x-risk, 1/6 was irrelevant, and 2/6 were likely harmful. None were clearly useful.&lt;/p&gt;

&lt;p&gt;Zvi Mowshowitz &lt;a href=&quot;https://www.lesswrong.com/posts/kuDKtwwbsksAW4BG2/zvi-s-thoughts-on-the-survival-and-flourishing-fund-sff&quot;&gt;wrote&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;In my model, one should be deeply skeptical whenever the answer to ‘what would do the most good?’ is ‘get people like me more money and/or access to power.’&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I agree, but even beyond that, the Horizon fellows don’t seem to be “people like me”. They include people who are arguing &lt;em&gt;against&lt;/em&gt; caring about x-risk.&lt;/p&gt;

&lt;p&gt;I believe the world would be better off if Horizon Institute did not exist (credence: 55%).&lt;/p&gt;

&lt;p&gt;And if I’m wrong about that, it still looks like Horizon fellows don’t do much work related to x-risk, so the expected value of Horizon Institute is low.&lt;/p&gt;

&lt;h2 id=&quot;institute-for-ai-policy-and-strategy&quot;&gt;Institute for AI Policy and Strategy&lt;/h2&gt;

&lt;p&gt;&lt;a href=&quot;https://www.iaps.ai/&quot;&gt;Institute for AI Policy and Strategy&lt;/a&gt; does policy research focused on US AI regulation, compute governance, lab governance, and international coordination with China.&lt;/p&gt;

&lt;p&gt;I’m more optimistic about advocacy than policy research, so this org is not one of my top candidates. That said, I like it better than most AI policy research orgs. Some observations from briefly reading some of its research:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;The proposals are non-vacuous and moderately concrete. For example, from its &lt;a href=&quot;https://ukdayone.org/briefings/assuring-growth-making-the-uk-a-global-leader-in-ai-assurance-technology&quot;&gt;recommendations to the UK government&lt;/a&gt;: “Investing £50m in ‘pull mechanisms’ (pay-outs contingent on achieving specific technological goals, such as prizes, AMCs, and milestone payments).” I don’t know how much that helps with x-risk, but it’s concrete.&lt;/li&gt;
  &lt;li&gt;Almost all of its work focuses on sub-extinction risks. Some of this looks potentially useful for x-risk, for example &lt;a href=&quot;https://www.iaps.ai/research/coordinated-disclosure&quot;&gt;establishing reporting requirements&lt;/a&gt;, or &lt;a href=&quot;https://www.iaps.ai/research/are-consumer-gpus-a-problem-for-us-export-controls&quot;&gt;recognizing the risks associated with exporting GPUs&lt;/a&gt;.&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://www.iaps.ai/research/responsible-scaling&quot;&gt;One report&lt;/a&gt; fairly criticizes some shortcomings of Anthropic’s Responsible Scaling Policy.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Institute for AI Policy and Strategy has received just under $4 million from Open Philanthropy (&lt;a href=&quot;https://www.openphilanthropy.org/grants/institute-for-ai-policy-strategy-general-support/&quot;&gt;1&lt;/a&gt;, &lt;a href=&quot;https://www.openphilanthropy.org/grants/institute-for-ai-policy-and-strategy-general-support-april-2024/&quot;&gt;2&lt;/a&gt;), and is &lt;a href=&quot;https://manifund.org/projects/ai-policy-work--iaps&quot;&gt;seeking additional funding&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id=&quot;lightcone-infrastructure&quot;&gt;Lightcone Infrastructure&lt;/h2&gt;

&lt;p&gt;&lt;a href=&quot;https://www.lightconeinfrastructure.com/&quot;&gt;Lightcone&lt;/a&gt; runs &lt;a href=&quot;https://www.lesswrong.com/about&quot;&gt;LessWrong&lt;/a&gt; and an office that Lightcone calls “&lt;a href=&quot;https://en.wikipedia.org/wiki/Bell_Labs&quot;&gt;Bell Labs&lt;/a&gt; for longtermism”.&lt;/p&gt;

&lt;p&gt;Lightcone has a detailed case for impact &lt;a href=&quot;https://manifund.org/projects/lightcone-infrastructure&quot;&gt;on Manifund&lt;/a&gt;. In short, Lightcone maintains LessWrong, and LessWrong is upstream of a large quantity of AI safety work.&lt;/p&gt;

&lt;p&gt;I believe Lightcone has high expected value and it can make good use of marginal donations.&lt;/p&gt;

&lt;p&gt;By maintaining LessWrong, Lightcone somewhat improves many AI safety efforts (plus efforts on other beneficial projects that don’t relate to AI safety). If I were very uncertain about what sort of work was best, I might donate to Lightcone as a way to provide diffuse benefits across many areas. But since I believe (a specific sort of) policy work has much higher EV than AI safety research, I believe it makes more sense to fund that policy work directly.&lt;/p&gt;

&lt;p&gt;An illustration with some made-up numbers: Suppose that&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;There are 10 categories of AI safety work.&lt;/li&gt;
  &lt;li&gt;Lightcone makes each of them 20% better.&lt;/li&gt;
  &lt;li&gt;The average AI safety work produces 1 utility point.&lt;/li&gt;
  &lt;li&gt;Well-directed AI policy produces 5 utility points.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Then a donation to Lightcone is worth 2 utility points, and my favorite AI policy orgs are worth 5 points. So a donation to Lightcone is better than the average AI safety org, but not as good as good policy orgs.&lt;/p&gt;
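&lt;p&gt;For concreteness, here is that toy calculation written out in a few lines of Python (the numbers are the same made-up ones as above; the variable names are just illustrative):&lt;/p&gt;

```python
# Made-up illustrative numbers from the text above.
categories = 10     # categories of AI safety work
boost = 0.20        # Lightcone makes each category 20% better
avg_value = 1.0     # utility points from the average AI safety work
policy_value = 5.0  # utility points from well-directed AI policy

# Funding Lightcone adds 20% of one utility point in each of 10 categories.
lightcone_value = categories * boost * avg_value
print(lightcone_value)  # 2.0
print(policy_value)     # 5.0
```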

&lt;h2 id=&quot;machine-intelligence-research-institute-miri&quot;&gt;Machine Intelligence Research Institute (MIRI)&lt;/h2&gt;

&lt;p&gt;&lt;a href=&quot;https://intelligence.org/&quot;&gt;MIRI&lt;/a&gt; used to do exclusively technical research. In 2024, it &lt;a href=&quot;https://intelligence.org/2024/01/04/miri-2024-mission-and-strategy-update/&quot;&gt;pivoted&lt;/a&gt; to focus on policy advocacy—specifically, advocating for &lt;a href=&quot;https://intelligence.org/2024/05/29/miri-2024-communications-strategy/&quot;&gt;shutting down frontier AI development&lt;/a&gt;. MIRI changed its mind at around the same time I &lt;a href=&quot;#ai-safety-technical-research-vs-policy&quot;&gt;changed my mind&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Some observations:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;MIRI gets considerable credit for being the first to recognize the AI alignment problem.&lt;/li&gt;
  &lt;li&gt;I have a high opinion of the general competence of MIRI employees.&lt;/li&gt;
  &lt;li&gt;Historically, I have agreed with MIRI’s criticisms of most technical alignment approaches, which suggests they have good reasoning processes. (With the caveat that I don’t really understand technical alignment research.)&lt;/li&gt;
  &lt;li&gt;Eliezer Yudkowsky’s &lt;a href=&quot;https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/&quot;&gt;TIME article&lt;/a&gt; publicly argued for AI pause and brought some attention to the issue (both positive and negative). My vibe sense says the article was valuable but who knows.&lt;/li&gt;
  &lt;li&gt;Eliezer personally has a strong track record of influencing (some subset of) people with &lt;a href=&quot;https://www.lesswrong.com/tag/original-sequences&quot;&gt;the LessWrong sequences&lt;/a&gt; and &lt;a href=&quot;https://hpmor.com/&quot;&gt;Harry Potter and the Methods of Rationality&lt;/a&gt;.&lt;/li&gt;
  &lt;li&gt;I know that MIRI is serious about existential risk and isn’t going to compromise its values.&lt;/li&gt;
  &lt;li&gt;Eliezer believes animals are not moral patients, which is kind of insane but probably not directly relevant. (&lt;a href=&quot;https://slatestarcodex.com/2019/02/26/rule-genius-in-not-out/&quot;&gt;Rule thinkers in, not out.&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;MIRI (or at least Eliezer) says P(doom) &amp;gt; 95%.&lt;sup id=&quot;fnref:64&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:64&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;45&lt;/a&gt;&lt;/sup&gt; Some people say this is crazy high, and that it drives MIRI toward dumb strategies like trying to shut down AI. I do think 95% is too high, but I also think most people are kind of crazy about probability—they treat probabilities below 50% as essentially 0%. If your P(doom) is 40%, you should be doing the same thing MIRI is doing.&lt;sup id=&quot;fnref:65&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:65&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;46&lt;/a&gt;&lt;/sup&gt; You should not be trying to develop AI as fast as possible while funding a little safety research on the side.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;MIRI’s new communications strategy has produced few results so far. We know that MIRI is &lt;a href=&quot;https://intelligence.org/2024/05/29/miri-2024-communications-strategy/&quot;&gt;working on&lt;/a&gt; a new website that explains the case for x-risk; a book; and an online reference. It remains to be seen how useful these will be. They don’t seem like &lt;em&gt;obviously&lt;/em&gt; good ideas to me,&lt;sup id=&quot;fnref:68&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:68&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;47&lt;/a&gt;&lt;/sup&gt; but I expect MIRI will correct course if a strategy isn’t working.&lt;/p&gt;

&lt;p&gt;Until recently, MIRI was not seeking funding because it received some large cryptocurrency donations in 2021ish. Now it’s started fundraising again to pay for its new policy work.&lt;/p&gt;

&lt;p&gt;I consider MIRI a top candidate. It only recently pivoted to advocacy so there’s not much to retrospectively evaluate, but I expect its work to be impactful.&lt;/p&gt;

&lt;h2 id=&quot;manifund&quot;&gt;Manifund&lt;/h2&gt;

&lt;p&gt;&lt;a href=&quot;https://manifund.org/&quot;&gt;Manifund&lt;/a&gt; does not do anything directly related to AI policy. It’s a fundraising platform. But I’m including it in this list because I’m impressed by how it’s changed the funding landscape.&lt;/p&gt;

&lt;p&gt;Many orgs have written fundraising pitches on Manifund. And for whatever reason, some of these pitches are &lt;em&gt;way&lt;/em&gt; higher quality than what I’m used to. I’m not sure why—maybe Manifund’s prompt questions draw out good answers.&lt;/p&gt;

&lt;p&gt;For example, originally I was skeptical that donations to Lightcone Infrastructure could be competitive with top charities, but its &lt;a href=&quot;https://manifund.org/projects/lightcone-infrastructure&quot;&gt;Manifund page&lt;/a&gt; changed my mind. I donated $200 just as a reward for the excellent writeup.&lt;/p&gt;

&lt;p&gt;Many of the orgs on my list (especially the smaller ones) wrote detailed pitches on Manifund that helped me decide where to donate. Manifund deserves part of the credit for that.&lt;/p&gt;

&lt;p&gt;Manifund is free to use, but it sometimes asks large donors to give a percentage of their donations to cover its operating costs. Manifund didn’t ask me to do that, so I didn’t.&lt;/p&gt;

&lt;h2 id=&quot;model-evaluation-and-threat-research-metr&quot;&gt;Model Evaluation and Threat Research (METR)&lt;/h2&gt;

&lt;p&gt;&lt;a href=&quot;https://metr.org&quot;&gt;METR&lt;/a&gt; evaluates large AI models to look for potentially dangerous capabilities. Its most obvious theory of change—where it finds a scary result and then the AI company pauses development—mainly depends on (1) AI companies giving access to METR (which they often &lt;a href=&quot;https://www.lesswrong.com/posts/yHFhWmu3DmvXZ5Fsm/clarifying-metr-s-auditing-role&quot;&gt;don’t&lt;/a&gt;) and (2) AI companies ceasing model development when METR establishes harmful capabilities (which they probably won’t—if there’s any ambiguity, they will likely choose the interpretation that lets them keep making more money).&lt;/p&gt;

&lt;p&gt;There’s an indirect but more promising theory of change where METR demonstrates a template for capability evaluation which policy-makers then rely on to impose safety regulations. To that end, METR has engaged with NIST’s AI risk management framework (&lt;a href=&quot;https://downloads.regulations.gov/NIST-2024-0001-0075/attachment_2.pdf&quot;&gt;pdf&lt;/a&gt;). This sounds potentially promising but it’s not where I would put money on the margin because:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;I &lt;a href=&quot;#slow-nuanced-regulation-vs-fast-coarse-regulation&quot;&gt;don’t think we should wait&lt;/a&gt; to figure out a solid evaluation framework before writing regulations.&lt;/li&gt;
  &lt;li&gt;Evaluations are helpful if we want to conditionally pause AI in the future, but not relevant if we want to unconditionally pause AI right now, and I believe we should do the latter.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2 id=&quot;palisade-research&quot;&gt;Palisade Research&lt;/h2&gt;

&lt;p&gt;&lt;a href=&quot;https://palisaderesearch.org/&quot;&gt;Palisade&lt;/a&gt; builds demonstrations of the offensive capabilities of AI systems, with the goal of illustrating risks to policy-makers.&lt;/p&gt;

&lt;p&gt;Some thoughts:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Demonstrating capabilities is probably a useful persuasion strategy.&lt;/li&gt;
  &lt;li&gt;Palisade has done some good work, like &lt;a href=&quot;https://arxiv.org/abs/2311.00117&quot;&gt;removing safety fine-tuning&lt;/a&gt; from Meta’s LLM.&lt;/li&gt;
  &lt;li&gt;I know some of the Palisade employees and I believe they’re competent.&lt;/li&gt;
&lt;li&gt;Historically, Palisade has focused on building out tech demos. I’m not sure how useful this is for x-risk, since you can’t demonstrate existentially threatening capabilities until it’s too late. Hopefully, Palisade’s audience can extrapolate from the demos to see that extinction is a serious concern.&lt;/li&gt;
  &lt;li&gt;Soon, Palisade plans to shift from primarily building demos to primarily using those demos to persuade policy-makers.&lt;/li&gt;
  &lt;li&gt;Palisade has a smallish team and has reasonable room to expand.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Palisade has not been actively fundraising, but I believe it can put funding to good use—it has limited runway and wants to hire more people.&lt;/p&gt;

&lt;p&gt;I think the work on building tech demos has rapidly diminishing utility, but Palisade is &lt;a href=&quot;https://palisaderesearch.org/hiring&quot;&gt;hiring&lt;/a&gt; for more policy-oriented roles, so I believe that’s mostly where marginal funding will go.&lt;/p&gt;

&lt;h2 id=&quot;pauseai-global&quot;&gt;PauseAI Global&lt;/h2&gt;

&lt;p&gt;(PauseAI Global and PauseAI US share the same mission and used to be part of the same org, so most of my comments on PauseAI Global also apply to PauseAI US.)&lt;/p&gt;

&lt;p&gt;From &lt;a href=&quot;https://manifund.org/projects/pauseai-local-communities---volunteer-stipends&quot;&gt;Manifund&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;&lt;a href=&quot;https://pauseai.info&quot;&gt;PauseAI&lt;/a&gt; is a grassroots community of volunteers which aim to inform the public and politicians about the risks from superhuman AI and urge them to work towards an international treaty that prevents the most dangerous AI systems from being developed.&lt;/p&gt;

  &lt;p&gt;PauseAI is largely organised through local communities which take actions to spread awareness such as letter writing workshops, peaceful protests, flyering and giving presentations.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Historically, I’ve been skeptical of public protests. I think people mainly protest because it’s fun and it makes them feel like they’re contributing, not because it actually helps.&lt;sup id=&quot;fnref:23&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:23&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;48&lt;/a&gt;&lt;/sup&gt; But PauseAI has been appropriately thoughtful (&lt;a href=&quot;https://forum.effectivealtruism.org/posts/Y4SaFM5LfsZzbnymu/the-case-for-ai-safety-advocacy-to-the-public&quot;&gt;1&lt;/a&gt;, &lt;a href=&quot;https://pauseai.info/feasibility&quot;&gt;2&lt;/a&gt;) about whether and when protests work, and it makes a reasonable case that protesting can be effective.&lt;/p&gt;

&lt;p&gt;(See also the &lt;a href=&quot;https://www.socialchangelab.org/_files/ugd/503ba4_052959e2ee8d4924934b7efe3916981e.pdf&quot;&gt;Protest Outcomes&lt;/a&gt; report by Social Change Lab. The evidence for the effectiveness of protests is a bit stronger than I expected.&lt;sup id=&quot;fnref:66&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:66&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;49&lt;/a&gt;&lt;/sup&gt;)&lt;/p&gt;

&lt;p&gt;I’m skeptical of the evidence because I don’t trust sociology research (it has approximately the worst replication record of any field). But I like PauseAI because:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Approximately zero percent of AI dollars go to AI safety, but approximately zero percent of AI safety dollars go to public advocacy.&lt;/li&gt;
  &lt;li&gt;Polls suggest that there’s widespread public support for pausing AI, and PauseAI has a good shot at converting that public support into policy change.&lt;/li&gt;
  &lt;li&gt;The people running PauseAI seem to have a good idea of what they’re doing, and it’s apparent that they are seriously concerned about existential risk (for most AI policy orgs, I can’t tell whether they care).&lt;/li&gt;
  &lt;li&gt;My impression is that the PauseAI founders went through a similar reasoning process as &lt;a href=&quot;#cause-prioritization&quot;&gt;I did&lt;/a&gt;, and concluded that public advocacy was the most promising approach.&lt;/li&gt;
  &lt;li&gt;I’ve listened to interviews and read articles by leaders from a number of AI policy orgs, and I like the vibes of the PauseAI leaders the best. Many people working in AI safety have &lt;a href=&quot;https://www.econlib.org/archives/2016/01/the_invisible_t.html&quot;&gt;missing moods&lt;/a&gt;, but the PauseAI people do not. I don’t put too much weight on vibes, but they still get nonzero weight.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Broadly speaking, I’m a little more optimistic about advocacy toward policy-makers than advocacy toward the public, simply because it’s more targeted. But PauseAI is still a top candidate because its approach is exceptionally neglected.&lt;/p&gt;

&lt;p&gt;PauseAI Global has no full-time employees; it focuses on supporting volunteers who run protests.&lt;/p&gt;

&lt;h2 id=&quot;pauseai-us&quot;&gt;PauseAI US&lt;/h2&gt;

&lt;p&gt;&lt;a href=&quot;https://manifund.org/projects/pauseai-us-2025-through-q2&quot;&gt;PauseAI US&lt;/a&gt; organizes protests to advocate for pausing AI development.&lt;/p&gt;

&lt;p&gt;Unlike PauseAI Global, which has no full-time employees, PauseAI US has a small full-time staff who run protests and political lobbying efforts. I like PauseAI US a little better than PauseAI Global because most major AI companies are headquartered in the US, so I expect a US-based org to have more potential for impact.&lt;/p&gt;

&lt;p&gt;PauseAI US also does grassroots lobbying (e.g., organizing volunteers to write letters to Congress) and direct lobbying (talking to policy-makers).&lt;/p&gt;

&lt;p&gt;Grassroots lobbying makes sense as a neglected intervention. Direct lobbying isn’t quite as neglected but it’s still one of my favorite interventions. PauseAI US only has a single lobbyist right now, Felix De Simone. He’s more junior than the lobbyists at some other policy orgs, but based on what I know of his &lt;a href=&quot;https://forum.effectivealtruism.org/posts/aYxuFeCcqRvaszHPb/ama-pauseai-us-needs-money-ask-founder-exec-dir-holly-elmore?commentId=HXkktn8NsdrEsxiPW&quot;&gt;background&lt;/a&gt;, I expect him to do a competent job.&lt;sup id=&quot;fnref:72&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:72&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;50&lt;/a&gt;&lt;/sup&gt; PauseAI US is performing well on obvious surface-level metrics like “number of meetings with Congressional offices per person per month”.&lt;/p&gt;

&lt;h2 id=&quot;sentinel-rapid-emergency-response-team&quot;&gt;Sentinel rapid emergency response team&lt;/h2&gt;

&lt;p&gt;&lt;a href=&quot;https://sentinel-team.org/&quot;&gt;Sentinel&lt;/a&gt; monitors world events for potential precursors to catastrophes. It publishes a &lt;a href=&quot;https://sentinel-team.org/#latest&quot;&gt;weekly newsletter&lt;/a&gt; with events of interest (such as “Iran launched a ballistic missile attack on Israel” or “Two people in California have been infected with bird flu”).&lt;/p&gt;

&lt;p&gt;Sentinel’s mission is to alert relevant parties so that looming catastrophes can be averted before they happen.&lt;/p&gt;

&lt;p&gt;You can read more information on Sentinel’s &lt;a href=&quot;https://manifund.org/projects/fund-sentinel-for-q4-2024&quot;&gt;Manifund page&lt;/a&gt; (short) and &lt;a href=&quot;https://docs.google.com/document/d/18GWF0pVy5X7M_0e3l49Ze82aqHvVurVpsLm63htmc64/&quot;&gt;fundraising memo&lt;/a&gt; (long).&lt;/p&gt;

&lt;p&gt;Some thoughts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I believe almost nobody would do a good job of running Sentinel because it’s hard to identify early warning signals of catastrophes. But Sentinel is run by members of &lt;a href=&quot;https://samotsvety.org/&quot;&gt;Samotsvety Forecasting&lt;/a&gt;, who I expect to be unusually good at this.&lt;/li&gt;
  &lt;li&gt;The value of Sentinel depends on who’s paying attention to its reports. I don’t know who’s paying attention to its reports.&lt;/li&gt;
  &lt;li&gt;Sentinel isn’t immediately relevant to AI policy, but it could be extremely valuable in certain situations. Namely, it could provide early warning if AI x-risk rapidly increases due to some series of events.&lt;/li&gt;
&lt;li&gt;AI x-risk aside, I still think Sentinel has high EV because it could significantly reduce catastrophic risk on a small budget. Without having investigated those cause areas, Sentinel is tentatively my #1 donation pick for reducing nuclear and biological x-risks.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Sentinel currently has four team members working part-time. With additional funding, its members could work full-time, and it could hire more people to do more comprehensive monitoring.&lt;/p&gt;

&lt;h2 id=&quot;simon-institute-for-longterm-governance&quot;&gt;Simon Institute for Longterm Governance&lt;/h2&gt;

&lt;p&gt;&lt;a href=&quot;https://www.simoninstitute.ch/&quot;&gt;Simon Institute&lt;/a&gt; supports policies to improve coordination, reduce global catastrophic risks, and embed consideration for future generations. It specifically focuses on influencing United Nations policy. (See the &lt;a href=&quot;https://forum.effectivealtruism.org/posts/aqwyGuJkZbnpjt3TR/update-on-the-simon-institute-year-one&quot;&gt;Year One Update&lt;/a&gt; from 2022.)&lt;/p&gt;

&lt;p&gt;Most of the org’s work appears not very relevant to x-risk. For example:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;It &lt;a href=&quot;https://www.simoninstitute.ch/blog/post/the-windfall-trust-workshop-exploring-potential-pathways-for-benefit-sharing-redistributing-ai-profits/&quot;&gt;co-hosted a workshop&lt;/a&gt; on how AI companies might redistribute their profits (via the &lt;a href=&quot;https://futureoflife.org/project/the-windfall-trust/&quot;&gt;Windfall Trust&lt;/a&gt;).&lt;/li&gt;
  &lt;li&gt;It &lt;a href=&quot;https://www.simoninstitute.ch/blog/post/nov-25-30-preparedness-bwc-meeting-of-states-parties/&quot;&gt;co-developed a table-top exercise&lt;/a&gt; on pandemic preparedness.&lt;/li&gt;
  &lt;li&gt;It &lt;a href=&quot;https://www.simoninstitute.ch/blog/post/response-to-revision-1-of-the-global-digital-compact-implications-for-ai-governance/&quot;&gt;proposed some minor changes&lt;/a&gt; to a UN &lt;a href=&quot;https://www.un.org/techenvoy/global-digital-compact&quot;&gt;document&lt;/a&gt; on AI governance.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This work seems reasonably good, but not as high-impact as work that directly targets x-risk reduction.&lt;/p&gt;

&lt;h2 id=&quot;stop-ai&quot;&gt;Stop AI&lt;/h2&gt;

&lt;p&gt;Like PauseAI, &lt;a href=&quot;https://www.stopai.info/&quot;&gt;Stop AI&lt;/a&gt; protests the development of superintelligent AI. Unlike PauseAI, Stop AI uses disruptive tactics like blocking entrances to OpenAI offices and blocking traffic.&lt;/p&gt;

&lt;p&gt;This is a higher-variance strategy. I find it plausible that Stop AI’s tactics are especially effective, but also likely that they will backfire and decrease public support. So in the absence of some degree of supporting evidence, I’m inclined not to support Stop AI.&lt;/p&gt;

&lt;p&gt;Stop AI’s &lt;a href=&quot;https://docs.google.com/document/d/1IgTaTMTZuY3kRLZ5JDFhwqQLATWVVGLOHVulbBY5O6g/&quot;&gt;proposal&lt;/a&gt; seems overreaching (it wants to &lt;em&gt;permanently&lt;/em&gt; ban AGI development) and it makes weak arguments.&lt;/p&gt;

&lt;p&gt;From listening to an &lt;a href=&quot;https://lironshapira.substack.com/p/getting-arrested-for-barricading&quot;&gt;interview&lt;/a&gt;, I get the impression that the Stop AI founders aren’t appropriately outcome-oriented and don’t have a well-formulated theory of change. In the interview, they would offer a justification for a particular action, and when the interviewer pointed out that the justification didn’t explain their behavior, they would switch to a different one. An example (paraphrased):&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;Stop AI: We blocked an entrance to OpenAI’s office to make it harder for employees to build AGI. We feel that this is necessary to stop OpenAI from killing everyone.&lt;/p&gt;

  &lt;p&gt;Interviewer: Then why did you block traffic, since that affects innocent bystanders, not the people building AGI?&lt;/p&gt;

  &lt;p&gt;Stop AI: We need to block traffic to raise awareness.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This pattern occurred a few times. They have reasonable concerns about the dangers of AI, but they don’t seem to have a good justification for why disruptive protests are the best way to handle those concerns.&lt;/p&gt;

&lt;p&gt;(I can see an argument for blocking entrances to AI company offices, but I think the argument for blocking traffic is much weaker.)&lt;/p&gt;

&lt;p&gt;In short, Stop AI is spiritually similar to PauseAI but with worse reasoning, worse public materials, and worse tactics.&lt;/p&gt;

&lt;h1 id=&quot;where-im-donating&quot;&gt;Where I’m donating&lt;/h1&gt;

&lt;p&gt;My top candidates:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;AI Safety and Governance Fund&lt;/li&gt;
  &lt;li&gt;PauseAI US&lt;/li&gt;
  &lt;li&gt;Center for AI Policy&lt;/li&gt;
  &lt;li&gt;Palisade&lt;/li&gt;
  &lt;li&gt;MIRI&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;A classification of every other org I reviewed:&lt;sup id=&quot;fnref:76&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:76&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;51&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Good but not funding-constrained:&lt;/strong&gt; Center for AI Safety, Future of Life Institute&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Would fund if I had more money:&lt;/strong&gt; Control AI, Existential Risk Observatory, Lightcone Infrastructure, PauseAI Global, Sentinel&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Would fund if I had a lot more money, but might fund orgs in other cause areas first:&lt;/strong&gt;&lt;sup id=&quot;fnref:32&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:32&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;52&lt;/a&gt;&lt;/sup&gt; AI Policy Institute, CEEALAR, Center for Human-Compatible AI, Manifund&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Might fund if I had a lot more money:&lt;/strong&gt; AI Standards Lab, Centre for the Governance of AI, Centre for Long-Term Policy, CivAI, Institute for AI Policy and Strategy, METR, Simon Institute for Longterm Governance&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Would not fund:&lt;/strong&gt; Center for Long-Term Resilience, Center for Security and Emerging Technology, Future Society, Horizon Institute for Public Service, Stop AI&lt;/p&gt;

&lt;h2 id=&quot;prioritization-within-my-top-five&quot;&gt;Prioritization within my top five&lt;/h2&gt;

&lt;p&gt;Here’s why I ordered my top five the way I did.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;#1: AI Safety and Governance Fund&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is my top choice because:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;It could greatly improve the value of future communications efforts.&lt;/li&gt;
  &lt;li&gt;It’s cheap, which means it’s cost-effective.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It would drop down the list quickly if it received more funding, but right now it’s #1.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;#2: PauseAI US&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I would expect advocacy toward policy-makers to be more impactful than public advocacy if they had similar levels of funding (credence: 60%). But pause protests are extremely neglected, so I believe they’re the most promising strategy on the margin. And PauseAI US is my favorite org doing protests because it operates in the United States and it appears appropriately competent and thoughtful.&lt;/p&gt;

&lt;p&gt;Protests are especially unpopular among institutional funders, which makes them more promising for individual donors like me.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;#3: Center for AI Policy&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is one of only three orgs (along with Palisade and PauseAI US) that meet four criteria:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;works to persuade policy-makers&lt;/li&gt;
  &lt;li&gt;focuses on AI x-risk over other less-important AI safety concerns&lt;sup id=&quot;fnref:69&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:69&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;53&lt;/a&gt;&lt;/sup&gt;&lt;/li&gt;
  &lt;li&gt;focuses on United States policy&lt;sup id=&quot;fnref:70&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:70&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;54&lt;/a&gt;&lt;/sup&gt;&lt;/li&gt;
  &lt;li&gt;is funding-constrained&lt;sup id=&quot;fnref:71&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:71&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;55&lt;/a&gt;&lt;/sup&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I’m nearly indifferent between Center for AI Policy and Palisade. I slightly prefer the former because (1) its employees have more experience in politics and (2) its mission/messaging seems less palatable to institutional funders so I expect it to have a harder time raising money.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;#4: Palisade&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Palisade meets the same four criteria as Center for AI Policy. As a little twist, Palisade also builds tech demos with the purpose of demonstrating the dangers of AI to policy-makers. Those demos might help or they might not be worth the effort—both seem equally likely to me—so this twist doesn’t change my expectation of Palisade’s cost-effectiveness. I only slightly favor Center for AI Policy for the two reasons mentioned previously.&lt;/p&gt;

&lt;p&gt;I personally know people at Palisade, which I think biases me in its favor, and I might put Palisade at #3 if I weren’t putting in mental effort to resist that bias.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;#5: MIRI&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;MIRI plans to target a general audience, &lt;del&gt;not policy-makers&lt;/del&gt; (update 2024-11-20: see correction below). That means they can reach more people but it’s also lower leverage. My guess is that targeting a general audience is worse on balance.&lt;/p&gt;

&lt;p&gt;I put PauseAI US higher than the two lobbying orgs because it has such a small budget. MIRI’s strategies, like PauseAI US’s, are also neglected, but considerably less so.&lt;sup id=&quot;fnref:74&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:74&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;56&lt;/a&gt;&lt;/sup&gt; I expect policy-maker outreach to be more effective than MIRI’s approach (credence: 60%).&lt;/p&gt;

&lt;p&gt;Lest I give the wrong impression, MIRI is still my #5 candidate out of 28 charities. I put it in the top five because I have a high opinion of MIRI leadership—I expect them to have reasonable prioritization and effective execution.&lt;/p&gt;

&lt;p&gt;CORRECTION: MIRI’s technical governance team does research to inform policy, and MIRI has spoken to policy-makers in the US government. This bumps up my evaluation of the org but I’m keeping it at #5 because working with policy-makers is only one part of MIRI’s overall activities.&lt;/p&gt;

&lt;h2 id=&quot;where-im-donating-this-is-the-section-in-which-i-actually-say-where-im-donating&quot;&gt;Where I’m donating (this is the section in which I actually say where I’m donating)&lt;/h2&gt;

&lt;p&gt;I agree with the standard argument that small donors should give all their money to their #1 favorite charity. That’s how I’ve done it &lt;a href=&quot;https://mdickens.me/donations/&quot;&gt;in the past&lt;/a&gt;, but this year I’m splitting my donations a little bit. I plan on donating:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;$5,000 to AI Safety and Governance Fund&lt;/li&gt;
  &lt;li&gt;$5,000 to PauseAI Global&lt;sup id=&quot;fnref:60&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:60&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;57&lt;/a&gt;&lt;/sup&gt;&lt;/li&gt;
  &lt;li&gt;$30,000 to PauseAI US&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here’s why I’m splitting my donations:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;AI Safety and Governance Fund is small, and I don’t want to represent too big a portion of its budget.&lt;/li&gt;
  &lt;li&gt;I donated to PauseAI Global before writing this post, and my prioritization changed somewhat after writing it.&lt;sup id=&quot;fnref:60:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:60&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;57&lt;/a&gt;&lt;/sup&gt;&lt;/li&gt;
  &lt;li&gt;That leaves PauseAI US as my top candidate, so the rest of my donations will go there.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I already donated $5,000 to PauseAI Global, but I haven’t made the other donations yet, so commenters have a chance to convince me to change my mind.&lt;/p&gt;

&lt;p&gt;If you wish to persuade me privately (or otherwise discuss in private), you can email me at &lt;a href=&quot;mailto:donations@mdickens.me&quot;&gt;donations@mdickens.me&lt;/a&gt; or &lt;a href=&quot;https://forum.effectivealtruism.org/users/michaeldickens&quot;&gt;message me&lt;/a&gt; on the EA Forum.&lt;/p&gt;

&lt;h1 id=&quot;changelog&quot;&gt;Changelog&lt;/h1&gt;

&lt;ul&gt;
  &lt;li&gt;2024-11-20: Provide a correction for some incorrect information about MIRI.&lt;/li&gt;
  &lt;li&gt;2025-04-25: Change some bolded sections into headings so they can be linked to.&lt;/li&gt;
&lt;/ul&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:4&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;At least the highly effective kinds of animal welfare. Things like animal shelters get a lot of funding but they’re not highly effective. &lt;a href=&quot;#fnref:4&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:13&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I believe humans are the most important species, but only because we will shape the future, not because we matter innately more.&lt;/p&gt;

      &lt;p&gt;To be precise, I believe any individual human’s welfare probably innately matters more than an individual animal of any other species. But there are so many more animals than humans that animals matter much more in aggregate. &lt;a href=&quot;#fnref:13&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:36&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I did change my mind in one relevant way—I used to think AI policy advocacy was very unlikely to work, and now I think it has a reasonable chance of working. More on this &lt;a href=&quot;#ai-safety-technical-research-vs-policy&quot;&gt;later&lt;/a&gt;. &lt;a href=&quot;#fnref:36&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:54&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;So maybe I was being reasonable before and now I’m over-weighting AI risk because I’m worried about getting killed by AI? If I’m being irrational right now, how would I know? &lt;a href=&quot;#fnref:54&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:25&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I studied computer science in university and I took three AI classes: Intro to AI, Intro to Machine Learning, and Convolutional Neural Networks for Natural Language Processing. My grades were below average but not terrible. &lt;a href=&quot;#fnref:25&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:47&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Some alignment researchers think we’re not that far away from solving AI alignment. I won’t go into detail because I don’t think I can do a great job of explaining my views. An informed alignment researcher could probably write &amp;gt;100,000 words detailing the progress in various subfields and predicting future progress to predict how close we are to solving alignment—something like &lt;a href=&quot;https://www.alignmentforum.org/posts/zaaGsFBeDTpCsYHef/shallow-review-of-live-agendas-in-alignment-and-safety&quot;&gt;this&lt;/a&gt; or &lt;a href=&quot;https://www.alignmentforum.org/posts/QBAjndPuFbhEXKcCr/my-understanding-of-what-everyone-in-technical-alignment-is&quot;&gt;this&lt;/a&gt;, but with more analysis and prediction—and some other informed alignment researcher could do the same thing and come up with a totally different answer.&lt;/p&gt;

      &lt;p&gt;Feel free to change the numbers on my &lt;a href=&quot;https://squigglehub.org/models/mdickens/ai-research-vs-policy&quot;&gt;quantitative model&lt;/a&gt; and write a comment about what answer you got. &lt;a href=&quot;#fnref:47&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:15&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Like if “just use reinforcement learning to teach the AI to be ethical” turns out to work. (Which I doubt, but some people seem to think it will work so idk.) &lt;a href=&quot;#fnref:15&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:3&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I don’t remember exactly what I used to believe. Maybe something like, “AI policy advocacy could be a good idea someday once AI looks more imminent and there’s more political will, but it’s not a good idea right now because people will think you’re crazy.” &lt;a href=&quot;#fnref:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:16&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;“Replace Gavin Newsom with a governor with &lt;a href=&quot;https://www.astralcodexten.com/p/sb-1047-our-side-of-the-story&quot;&gt;more integrity&lt;/a&gt;” might be an effective intervention, but probably not cost-effective—there’s already too much money in state elections. &lt;a href=&quot;#fnref:16&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:38&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I realized midway through writing this post that I had made this major cause prioritization decision without even making up some numbers and slapping them into a Monte Carlo simulation, which was very out of character for me. &lt;a href=&quot;#fnref:38&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:63&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;More accurately, I believe we live in one of two worlds:&lt;/p&gt;

      &lt;ol&gt;
        &lt;li&gt;Prosaic alignment works, in which case we will probably not have any lack of funding for AI alignment, and my marginal donation has a small chance of making a difference.&lt;/li&gt;
        &lt;li&gt;Alignment is hard, in which case we will probably not have nearly enough funding (assuming no regulation), and my marginal donation has a small chance of making a difference.&lt;/li&gt;
      &lt;/ol&gt;

      &lt;p&gt;And really there’s a continuum between those, so there’s a small chance that AI alignment is at just the right level of difficulty for marginal donations to make a difference. &lt;a href=&quot;#fnref:63&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:20&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;There’s a lot of spending on so-called safety research that’s really just safetywashing. I wouldn’t count that as part of my estimate. &lt;a href=&quot;#fnref:20&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:39&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;For example, with the default inputs, the cost to solve alignment is set to a log-normal distribution with 25th/75th percentiles at $1 billion and $1 trillion. If you tighten the distribution to $10 billion to $1 trillion, the marginal value of spending on alignment research drops to ~0. &lt;a href=&quot;#fnref:39&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
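      &lt;p&gt;(As a rough sketch of what those inputs mean: a log-normal with 25th/75th percentiles at $1 billion and $1 trillion can be parameterized as follows. The model itself is in Squiggle; this Python version is only illustrative.)&lt;/p&gt;

```python
# Recover log-normal parameters from the model's default quartile inputs:
# 25th percentile = $1 billion, 75th percentile = $1 trillion.
import math
from statistics import NormalDist

q25, q75 = 1e9, 1e12
z75 = NormalDist().inv_cdf(0.75)  # ~0.6745

# If X is log-normal, ln(X) is normal with mean mu and std sigma.
mu = (math.log(q25) + math.log(q75)) / 2
sigma = (math.log(q75) - math.log(q25)) / (2 * z75)

# The median is the geometric midpoint of the two quartiles, ~$31.6 billion.
median = math.exp(mu)
print(f"median cost = ${median:,.0f}, sigma = {sigma:.2f}")
```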
    &lt;/li&gt;
    &lt;li id=&quot;fn:5&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I’ve met individual researchers who said this and I don’t think they were lying, but I think their beliefs were motivated by a desire to build SOTA models because SOTA models are cool. &lt;a href=&quot;#fnref:5&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:7&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I’m slightly concerned that I shouldn’t pick on Anthropic because they’re the least unethical of the big AI companies (as far as I can tell). But I think when you’re building technology that endangers the lives of every sentient being who lives and who ever will live, you should be held to an extremely high standard of honesty and communication, and Anthropic falls &lt;em&gt;embarrassingly, horrifyingly short&lt;/em&gt; of that standard. As they say, reality does not grade on a curve. &lt;a href=&quot;#fnref:7&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:41&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I would actually go further than that—I think the &lt;em&gt;best&lt;/em&gt; types of alignment research don’t require SOTA models. But that’s more debatable, and it’s not required for my argument. &lt;a href=&quot;#fnref:41&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:9&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Suppose conservatively that the lightcone will be usable for another billion years, and that we need to delay superintelligent AI by 100 years to make it safe. The volume of the lightcone is proportional to time cubed. Therefore, assuming a constant rate of expansion, delaying 100 years means we can only access 99.99997% of the lightcone instead of 100%. Even at an incredibly optimistic P(doom) (say, 0.001%), accelerating AI isn’t worth it on a naive longtermist view. &lt;a href=&quot;#fnref:9&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
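      &lt;p&gt;(The arithmetic behind that figure, using the footnote’s own assumptions of a billion-year horizon and a 100-year delay:)&lt;/p&gt;

```python
# Fraction of the lightcone still reachable after a 100-year delay,
# assuming a 1-billion-year usable horizon and volume proportional to t^3.
T = 1e9      # years the lightcone is usable
delay = 100  # years of delayed expansion
fraction = ((T - delay) / T) ** 3
print(f"{fraction:.7%}")  # ~99.99997% of the undelayed volume
```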
    &lt;/li&gt;
    &lt;li id=&quot;fn:8&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I’m making an ad hominem argument on purpose. Altman’s arguments seem bad to me, maybe I’m missing something because he understands AI better than I do, but in fact it looks like the better explanation isn’t that I’m missing something, but that Altman is genuinely making bad arguments because his reasoning is motivated—and he’s a known liar so I’m perfectly happy to infer that he’s lying about this issue too. &lt;a href=&quot;#fnref:8&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:75&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Perhaps I’m using a circular definition of “respectable” but I don’t consider someone respectable (on this particular issue) if they estimate P(doom) at &amp;lt;1%. &lt;a href=&quot;#fnref:75&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:10&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I put roughly 80% credence on each of five arguments. If each argument is independent, that means I’m probably wrong about at least one of them, but I don’t think they’re independent.&lt;/p&gt;

      &lt;p&gt;And some of the arguments are more decisive than others. For example, if I’m wrong about the opportunity cost argument (#4), then I should switch sides. But if I’m wrong about the hardware overhang argument (#1) and overhang is indeed a serious concern, that doesn’t necessarily mean we shouldn’t slow AI development, it just means a slowdown improves safety in one way and harms safety in another way, and it’s not immediately clear which choice is safer. &lt;a href=&quot;#fnref:10&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:42&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;At least for some reasons people cite for not wanting to pause, they should agree with me on this. There are still some counter-arguments, like “we can’t delay AI because that delays the glorious transhumanist future”, but I consider those to be the weakest arguments. &lt;a href=&quot;#fnref:42&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:12&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;You could argue that pseudoephedrine is over-regulated, and in fact I would agree. But I don’t think those regulations are a particularly big problem, either. &lt;a href=&quot;#fnref:12&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:52&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Companies make more money from more powerful models, and powerful models are more dangerous. Power and safety directly trade off against each other until you can figure out how to build powerful models that aren’t dangerous—which means you need to solve alignment first. &lt;a href=&quot;#fnref:52&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:40&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;In some sense, if we get strong regulations, the companies win, because all the companies’ employees and shareholders don’t get killed by unfriendly AI. But they’ll be unhappy in the short term because they irrationally prioritize profit over not getting killed.&lt;/p&gt;

      &lt;p&gt;I don’t understand what’s going on here psychologically—according to the expressed beliefs of people like Dario Amodei and Shane Legg, they’re massively endangering their own lives in exchange for profit. It’s not even that they disagree with me about key facts, they’re just doing things that make no sense according to their own (expressed) beliefs. &lt;a href=&quot;#fnref:40&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:11&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I’m inclined to say it doesn’t matter at all, but some smart alignment researchers think it does, and they know more than me. Changing my mind wouldn’t materially change my argument here.&lt;/p&gt;

      &lt;p&gt;Paul Christiano, who ~invented RLHF, seems to believe that it is not a real solution to alignment, but improvements on the method might lead to a solution. (He wrote something like this in a comment I read a few days ago that now I can’t find.)&lt;/p&gt;

      &lt;p&gt;On the other hand, RLHF makes the AI look more aligned even if it isn’t, and this might hurt by misleading people into thinking it’s aligned, so they proceed with expanding capabilities when really they shouldn’t.&lt;/p&gt;

      &lt;p&gt;RLHF also makes LLMs less likely to say PR-damaging things. Without RLHF, AI companies might develop LLMs more cautiously out of fear of PR incidents. &lt;a href=&quot;#fnref:11&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:14&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;This doesn’t make sense as a crux because an RSP also creates a hardware overhang if it triggers, but I’ve already talked about why I dislike the “hardware overhang” argument in general. &lt;a href=&quot;#fnref:14&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:19&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;“[thing I don’t understand] must be simple, right?” -famous last words &lt;a href=&quot;#fnref:19&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:21&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;At least that’s what I thought when I first wrote this sentence, before I had looked into any AI policy orgs. After having looked into them, I found it pretty easy to strike some orgs off the list. I don’t know what conversations orgs are having with policy-makers and how productive those conversations are, but I can read their public reports, and I can tell when their public reports aren’t good. And if their reports aren’t good, they probably don’t do a good job of influencing policy-makers either. &lt;a href=&quot;#fnref:21&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:57&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;ul&gt;
        &lt;li&gt;The &lt;a href=&quot;https://funds.effectivealtruism.org/funds/far-future&quot;&gt;Long-Term Future Fund&lt;/a&gt; (LTFF) has given very little money to AI policy.&lt;/li&gt;
        &lt;li&gt;The &lt;a href=&quot;https://www.airiskfund.com/&quot;&gt;AI Risk Mitigation Fund&lt;/a&gt; is a spinoff of LTFF that focuses exclusively on AI safety. As of this writing, it hasn’t made any grants yet, but I assume it will behave similarly to LTFF.&lt;/li&gt;
        &lt;li&gt;Longview Philanthropy’s &lt;a href=&quot;https://www.longview.org/fund/emerging-challenges-fund/&quot;&gt;Emerging Challenges Fund&lt;/a&gt; and the &lt;a href=&quot;https://survivalandflourishing.fund/&quot;&gt;Survival and Flourishing Fund&lt;/a&gt; have given some grants on AI policy, but mostly on other cause areas.&lt;/li&gt;
        &lt;li&gt;Manifund has a &lt;a href=&quot;https://manifund.org/about/regranting&quot;&gt;regranting program&lt;/a&gt;, but 3 out of 6 regranters are current or former employees at AI companies, which makes me disinclined to trust their judgment; and their grants so far mostly focus on alignment research, not policy.&lt;/li&gt;
        &lt;li&gt;&lt;a href=&quot;https://forum.effectivealtruism.org/users/larks&quot;&gt;Larks&lt;/a&gt; used to write reviews of AI safety orgs, but they haven’t done it in a while—and they primarily reviewed alignment research, not policy.&lt;/li&gt;
        &lt;li&gt;Nuño Sempere did some &lt;a href=&quot;https://forum.effectivealtruism.org/posts/xmmqDdGqNZq5RELer/shallow-evaluations-of-longtermist-organizations&quot;&gt;shallow investigations&lt;/a&gt; three years ago, but they’re out of date.&lt;/li&gt;
      &lt;/ul&gt;
      &lt;p&gt;&lt;a href=&quot;#fnref:57&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:56&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;As of this writing (2024-11-02), the recommendations are:&lt;/p&gt;

      &lt;ol&gt;
        &lt;li&gt;Horizon Institute for Public Service&lt;/li&gt;
        &lt;li&gt;Institute for Law and AI&lt;/li&gt;
        &lt;li&gt;Effective Institutions Project’s work on AI governance&lt;/li&gt;
        &lt;li&gt;FAR AI&lt;/li&gt;
        &lt;li&gt;Centre for Long-Term Resilience&lt;/li&gt;
        &lt;li&gt;Center for Security and Emerging Technology&lt;/li&gt;
        &lt;li&gt;Center for Human-Compatible AI&lt;/li&gt;
      &lt;/ol&gt;
      &lt;p&gt;&lt;a href=&quot;#fnref:56&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:58&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I don’t know exactly what’s going on with the difference between Founders Pledge recs and my top donation candidates. It looks to me like Founders Pledge puts too much stock in the “build influence to use later” theory of change, and it cares too much about orgs’ legible status / reputation. &lt;a href=&quot;#fnref:58&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:29&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Some of these orgs don’t exactly work on policy, but they do work that plausibly helps policy. There are some orgs fitting that description that I didn’t review (e.g., AI Impacts and Epoch AI). I had to make judgment calls on reviewing plausibly-relevant orgs vs. saving time, and a different reviewer might have made different calls.&lt;/p&gt;

      &lt;p&gt;In fact, I think I would be more likely to donate to (e.g.) AI Impacts than (e.g.) METR, so why did I write about METR but not AI Impacts? Mainly because I had already put some thought into METR and I figured I might as well write them down, but I haven’t put much thought into AI Impacts. &lt;a href=&quot;#fnref:29&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:49&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Some Open Philanthropy employees stand to make money if AI companies do well; and Holden Karnofsky (who no longer works at Open Philanthropy, but used to run it) has &lt;a href=&quot;https://forum.effectivealtruism.org/posts/Pfayu5Bf2apKreueD/a-playbook-for-ai-risk-reduction-focused-on-misaligned-ai&quot;&gt;expressed&lt;/a&gt; that he expects us to avert x-risk by an AI company internally solving alignment. &lt;a href=&quot;#fnref:49&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:27&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;It’s nice that AI safety is so much better funded than it used to be, but for my own sake, I kind of miss the days when there were only like five x-risk orgs. &lt;a href=&quot;#fnref:27&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:26&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;The names made it difficult for me to edit this post. Many times I would be re-reading a sentence where I referenced some org, and I wouldn’t remember which org it was. “Centre for the Governance of AI? Wait, is that the one that runs polls? Or the one that did SB 1047? Er, no, it’s the one that spun out of the Future of Humanity Institute.”&lt;/p&gt;

      &lt;p&gt;Shout-out to Lightcone, Palisade, and Sentinel for having memorable names. &lt;a href=&quot;#fnref:26&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:28&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;But maybe that’s a silly thing to worry about. Compare: “I only invest in companies that don’t have managers. Managers’ salaries just take away money from the employees who do the real work.” &lt;a href=&quot;#fnref:28&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:61&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I’m not convinced that it’s good practice, but at least some people believe it is. &lt;a href=&quot;#fnref:61&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:43&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Also, I’m not sure how much consideration to give this, but I have a vague sense that sharing criticism with the orgs being criticized would hurt my epistemics. Like, maybe if I talk to them, I will become overly predisposed toward politeness and end up deleting accurate criticisms that I should’ve left in. &lt;a href=&quot;#fnref:43&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:73&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;The word “funge” is not in the dictionary; I would define it as “causing a &lt;a href=&quot;https://en.wikipedia.org/wiki/Fungibility&quot;&gt;fungible&lt;/a&gt; good [in this case, money] to be used for a different purpose.” That is, causing SFF to give some of its money to a different nonprofit. &lt;a href=&quot;#fnref:73&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:22&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;To be honest it’s not that funny given the stakes, but I try to find a little humor where I can. &lt;a href=&quot;#fnref:22&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:67&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;The linked source isn’t why I believe this claim; I believe it based on things I’ve heard in personal communications. &lt;a href=&quot;#fnref:67&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:46&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Examples:&lt;/p&gt;

      &lt;ul&gt;
        &lt;li&gt;Andrea brought up the classic argument that AI becomes really dangerous once it’s self-improving. But, he said, it’s not clear what exactly counts as self-improving. It’s already something like self-improving because many ML engineers use LLMs to help them with work tasks. Andrea proposed that the really dangerous time starts once AI is about as competent as remote workers, because that’s when you can massively accelerate the rate of progress. I don’t have a strong opinion on whether that’s true, but it made me think.&lt;/li&gt;
        &lt;li&gt;Andrea said it’s a big problem that we don’t have a “science of intelligence”. We don’t really know what it means for AIs to be smart, all we have is a hodgepodge of benchmarks. We can’t properly evaluate AI capabilities unless we have a much better understanding of what intelligence is.&lt;/li&gt;
      &lt;/ul&gt;
      &lt;p&gt;&lt;a href=&quot;#fnref:46&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:45&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;To be clear: sometimes, when people say “… but I don’t know if X”, that’s a polite way of saying “I believe not-X.” In this case, that’s not what I mean—what I mean is that I don’t know. &lt;a href=&quot;#fnref:45&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:31&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;The report is about defending against cyber-attacks, not about executing cyber-attacks. But it’s also about how to increase AI capabilities. (And an AI that’s smarter about defending against cyber-attacks might also be better at executing them.) I can see a good argument that this work is net negative but there’s an argument the other way, too. &lt;a href=&quot;#fnref:31&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:64&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Several sources claim Eliezer’s P(doom) &amp;gt; 95% but their source is a &lt;a href=&quot;https://www.fastcompany.com/90994526/pdoom-explained-how-to-calculate-your-score-on-ai-apocalypse-metric&quot;&gt;news article&lt;/a&gt; and the news article doesn’t cite a source. I could not find any direct quote. &lt;a href=&quot;#fnref:64&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:65&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;At least you should have the same goals, if not the same tactics. &lt;a href=&quot;#fnref:65&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:68&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I would not have thought “write Harry Potter fan fiction” was a good strategy, but I turned out to be wrong on that one. &lt;a href=&quot;#fnref:68&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:23&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Although to be fair, that’s also why most AI “safety” researchers do capabilities research. &lt;a href=&quot;#fnref:23&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:66&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;For example, various studies have looked at natural experiments where protests do or do not occur based on whether it rains, and they find that protesters’ positions are slightly more popular when it does not rain. The effect shows up repeatedly across multiple studies of different movements. &lt;a href=&quot;#fnref:66&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:72&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I don’t have meaningful insight into whether any particular person would be good at lobbying. I think I can identify that most people would be bad at it, so the best I can do is fail to find any reasons to expect someone to be bad. I don’t see any reasons to expect Felix to be bad, except that he’s junior but that’s a weak reason. &lt;a href=&quot;#fnref:72&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:76&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Outside of the top five, I didn’t think about these classifications very hard. &lt;a href=&quot;#fnref:76&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:32&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;For example, I would probably fund &lt;a href=&quot;https://gfi.org/&quot;&gt;Good Food Institute&lt;/a&gt; ahead of most of these. &lt;a href=&quot;#fnref:32&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:69&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;That screens off most of the policy orgs on my list. &lt;a href=&quot;#fnref:69&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:70&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;That screens off Control AI and Existential Risk Observatory. &lt;a href=&quot;#fnref:70&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:71&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;That screens off Center for AI Safety and Future of Life Institute. &lt;a href=&quot;#fnref:71&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:74&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Only MIRI is pursuing the sorts of strategies that MIRI is pursuing, but it has &amp;gt;100x more money than PauseAI. &lt;a href=&quot;#fnref:74&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:60&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;When I made the donation to PauseAI Global, I was under the impression that PauseAI Global and PauseAI US were one organization. That was true at one point, but they had split by the time I made this donation. If I had known that, I would have donated to PauseAI US instead. But I’m not bothered by it because I still think donations to PauseAI Global have high expected value.&lt;/p&gt;

      &lt;p&gt;Also, when I donated the money, I wasn’t planning on writing a whole post. It wasn’t until later that I decided to do a proper investigation and write what you’re currently reading. That’s why I didn’t wait before donating. &lt;a href=&quot;#fnref:60&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:60:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>Subjects in Psych Studies Are More Rational Than Psychologists</title>
				<pubDate>Tue, 15 Oct 2024 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2024/10/15/subjects_are_more_rational_than_psychologists/</link>
				<guid isPermaLink="true">http://mdickens.me/2024/10/15/subjects_are_more_rational_than_psychologists/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;Psychologists have done experiments that supposedly show how people behave irrationally. But in some of those experiments, people &lt;em&gt;do&lt;/em&gt; behave rationally, and it’s the psychologists’ expectations that are irrational.&lt;/p&gt;

&lt;!-- more --&gt;

&lt;h2 id=&quot;contents&quot;&gt;Contents&lt;/h2&gt;

&lt;ul id=&quot;markdown-toc&quot;&gt;
  &lt;li&gt;&lt;a href=&quot;#contents&quot; id=&quot;markdown-toc-contents&quot;&gt;Contents&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#ultimatum-game&quot; id=&quot;markdown-toc-ultimatum-game&quot;&gt;Ultimatum game&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#hyperbolic-discounting&quot; id=&quot;markdown-toc-hyperbolic-discounting&quot;&gt;Hyperbolic discounting&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#dunning-kruger-effect&quot; id=&quot;markdown-toc-dunning-kruger-effect&quot;&gt;Dunning-Kruger effect&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#non-examples-asch-and-milgram&quot; id=&quot;markdown-toc-non-examples-asch-and-milgram&quot;&gt;Non-examples: Asch and Milgram&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#themes&quot; id=&quot;markdown-toc-themes&quot;&gt;Themes&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#notes&quot; id=&quot;markdown-toc-notes&quot;&gt;Notes&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;ultimatum-game&quot;&gt;Ultimatum game&lt;/h2&gt;

&lt;p&gt;The &lt;a href=&quot;https://en.wikipedia.org/wiki/Ultimatum_game&quot;&gt;ultimatum game&lt;/a&gt; works like this:&lt;/p&gt;

&lt;p&gt;There are two participants, call them Alice and Bob. Alice receives $100. Alice then chooses some amount of that money to offer to Bob. Bob can accept the offer and they both walk away with their money, or he can reject the offer, in which case they both get nothing.&lt;/p&gt;

&lt;p&gt;According to standard theory, Alice should offer Bob $1 and keep $99 for herself, and Bob should accept the offer because a dollar is better than no dollars. In practice, most “Alices” offer something like a 50/50 split, and most “Bobs” reject the offer if given something like an uneven split. Psychologists&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt; are confused about this supposedly irrational behavior.&lt;/p&gt;

&lt;p&gt;But in fact, Alice and Bob both behave rationally. Bob rejects unfair offers, which makes him have less money in some cases. But Alice &lt;em&gt;expects&lt;/em&gt; Bob to reject unfair offers, so she offers Bob a 50/50 split.&lt;/p&gt;

&lt;p&gt;Bob follows a general strategy of rejecting unfair offers, because he believes or intuits that this strategy will ensure he mostly receives fair offers. And Alice knows Bob will probably reject an unfair offer because that’s what most people do. So Bob has (acausally) induced Alice to give him $50 instead of $1. In the world where Bob rejects unfair offers, Bob &lt;a href=&quot;https://www.lesswrong.com/posts/4ARtkT3EYox3THYjF/rationality-is-systematized-winning&quot;&gt;wins&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In other words, most people don’t use &lt;a href=&quot;https://www.lesswrong.com/tag/causal-decision-theory&quot;&gt;causal decision theory&lt;/a&gt;.&lt;/p&gt;
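
&lt;p&gt;To make that concrete, here’s a toy model (my own illustration, with an arbitrary 40% rejection threshold): a proposer who knows the responder population’s acceptance rule picks whichever offer maximizes his own expected take.&lt;/p&gt;

```python
# Toy ultimatum game: the proposer knows the responders' acceptance rule and
# chooses the offer that maximizes his own expected share of a $100 pot.
POT = 100

def proposer_offer(accept_prob):
    """The offer o that maximizes (POT - o) * P(accept given offer o)."""
    return max(range(POT + 1), key=lambda o: (POT - o) * accept_prob(o))

# A responder following causal decision theory takes anything.
always_accept = lambda o: 1.0
# A responder who rejects any offer under 40 (accept iff min(o, 40) == 40).
reject_unfair = lambda o: 1.0 if min(o, 40) == 40 else 0.0

print(proposer_offer(always_accept))  # 0
print(proposer_offer(reject_unfair))  # 40
```

&lt;p&gt;The responder who “irrationally” commits to rejecting lowball offers walks away with $40; the one who accepts anything walks away with nothing.&lt;/p&gt;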

&lt;h2 id=&quot;hyperbolic-discounting&quot;&gt;Hyperbolic discounting&lt;/h2&gt;

&lt;p&gt;The standard model of rational behavior assumes exponential discounting: future goods decrease in value at a fixed rate (say, 10% per year). But experiments show that many people use &lt;a href=&quot;https://en.wikipedia.org/wiki/Hyperbolic_discounting&quot;&gt;hyperbolic discounting&lt;/a&gt;, which means the value of future goods first falls off rapidly, and then slowly.&lt;/p&gt;

&lt;p&gt;The classic experiment:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Ask people if they would prefer $100 now or $120 a month from now. Most people prefer the $100 now.&lt;/li&gt;
  &lt;li&gt;Ask people if they would prefer $100 in 12 months or $120 in 13 months. Most people prefer the $120 in 13 months.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Psychology textbooks say this is irrational: the monthly discount rate must be either more than 20% (in which case they should take the money sooner in both cases) or less than 20% (in which case they should take the money later). But in real life, people’s behavior makes sense.&lt;/p&gt;

&lt;p&gt;If you say you’re going to give me money right now, you’re probably going to do it. If you say you’re going to give me $120 a month from now, I don’t know what’s going to happen. Maybe you’ll forget, maybe your study will run out of funding, I don’t know. So I’d rather have the money now.&lt;/p&gt;

&lt;p&gt;If you can give me money in 12 months, you can most likely also give me money in 13 months, so I’m not too concerned about waiting an extra month in that case.&lt;/p&gt;

&lt;p&gt;In mathematical terms, the probability that you’ll give me the money decreases hyperbolically with time, not exponentially, so I ought to use hyperbolic discounting.&lt;/p&gt;

&lt;p&gt;(Plus there is another, more technical, reason to use hyperbolic discounting: if the “true” discount function is exponential but you don’t know what exact discount rate to use, then your &lt;em&gt;expected&lt;/em&gt; discount rate starts out high and decreases over time, which produces a hyperbolic discount function.)&lt;/p&gt;
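
&lt;p&gt;You can check that parenthetical with made-up numbers. Suppose the true discount function is exponential, but the monthly rate is either 0.1% or 50% with equal probability (hypothetical values I picked for illustration). The &lt;em&gt;expected&lt;/em&gt; discount curve then reproduces the preference reversal from the experiment:&lt;/p&gt;

```python
import math

# Unknown exponential discount rate: 0.1%/month or 50%/month, equally likely.
rates = [0.001, 0.5]

def discount(t):
    """Expected discount factor at month t, averaged over the unknown rate."""
    return sum(math.exp(-r * t) for r in rates) / len(rates)

# $100 now beats $120 in a month...
print(100 * discount(0), 120 * discount(1))    # 100.0 vs. about 96.3
# ...but $120 in 13 months beats $100 in 12.
print(100 * discount(12), 120 * discount(13))  # about 49.5 vs. about 59.3
```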

&lt;h2 id=&quot;dunning-kruger-effect&quot;&gt;Dunning-Kruger effect&lt;/h2&gt;

&lt;p&gt;The Dunning-Kruger effect is often misrepresented as showing that ignorant people think they’re knowledgeable, and knowledgeable people think they’re ignorant. To my knowledge, no study has ever shown that.&lt;/p&gt;

&lt;p&gt;The original Kruger &amp;amp; Dunning (1999)&lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt; paper, and some follow-up studies, produced graphs that looked something like this:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/Dunning-Kruger.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;(This graph is an illustration, not based on actual data.)&lt;/p&gt;

&lt;p&gt;Notice two things about this graph:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;The “perceived ability” line is a bit too high.&lt;/li&gt;
  &lt;li&gt;The slope is too flat—everyone estimates their ability as closer to average than it really is.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;#1 is a real (albeit small) bias—people systematically overestimate their abilities by a little bit. But the flattened slope of perceived ability is rational, for two reasons.&lt;/p&gt;

&lt;p&gt;Reason 1. If your test has some degree of noise, then you expect the top scorers to have gotten lucky and the bottom scorers to have gotten unlucky. The top scorers aren’t quite as good as the test makes them look, and the bottom scorers aren’t quite as &lt;em&gt;bad&lt;/em&gt; as the test makes them look. So the &lt;em&gt;true&lt;/em&gt; skill curve is flatter than the &lt;em&gt;measured&lt;/em&gt; skill curve. For more on this, see &lt;a href=&quot;https://gwern.net/doc/iq/2020-gignac.pdf&quot;&gt;Gignac &amp;amp; Zajenkowski (2020)&lt;/a&gt;&lt;sup id=&quot;fnref:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;Reason 2. People do not have perfect knowledge of their own abilities. If you know nothing else, you should expect to perform about average. If you know a little bit about your own skill level, you should expect to perform a little below or a little above average. Even if you’re in the top 10% or bottom 10% of skill, it should take a lot of evidence to convince you of that, so it’s perfectly rational for you to estimate your own skill as closer to average than it really is.&lt;/p&gt;

&lt;p&gt;Both these reasons say essentially the same thing: if you have a noisy measure of ability, your belief conditional on that measure should be closer to average than what the measure itself shows. This produces a flattened curve like what we see in the Dunning-Kruger effect, and it’s not a bias but a perfectly rational way of reasoning about imperfect evidence.&lt;/p&gt;
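
&lt;p&gt;Reason 1 is easy to simulate (all numbers here are arbitrary): give everyone a true skill, add test noise, and compare scores to skills at the extremes.&lt;/p&gt;

```python
import random
import statistics

random.seed(0)
n = 100_000
skill = [random.gauss(0, 1) for _ in range(n)]
score = [s + random.gauss(0, 1) for s in skill]  # test = skill + noise

ranked = sorted(zip(score, skill))
bottom, top = ranked[: n // 4], ranked[-(n // 4) :]

# Top scorers are not as skilled as their scores suggest, and bottom scorers
# are not as unskilled -- the true-skill curve is flatter than the score curve.
print(statistics.mean(s for s, _ in top), statistics.mean(k for _, k in top))
print(statistics.mean(s for s, _ in bottom), statistics.mean(k for _, k in bottom))
```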

&lt;h2 id=&quot;non-examples-asch-and-milgram&quot;&gt;Non-examples: Asch and Milgram&lt;/h2&gt;

&lt;p&gt;Originally I wanted to include the &lt;a href=&quot;https://en.wikipedia.org/wiki/Asch_conformity_experiments&quot;&gt;Asch conformity experiments&lt;/a&gt; and the &lt;a href=&quot;https://en.wikipedia.org/wiki/Milgram_experiment&quot;&gt;Milgram experiments&lt;/a&gt; on this list—I thought participants behaved rationally. But after doing some more research, I think the rational-participant theory can’t fully explain the observations.&lt;/p&gt;

&lt;p&gt;(If you’re not familiar with the Asch or Milgram experiments, click the links in the previous paragraph for explanations. They’re a bit too complicated to explain in this post.)&lt;/p&gt;

&lt;p&gt;For Asch, I originally wanted to argue that it’s rational to update your beliefs when other people disagree with you. But the experiments found that people are more willing to disagree with the group when they can give their answers in secret. If people were rationally updating on others’ beliefs, then they’d agree with the group in secret as well as in public.&lt;/p&gt;

&lt;p&gt;I still think it’s rational for subjects to update their beliefs toward those of the other participants, but the experiments suggest that people update more than they should.&lt;/p&gt;

&lt;p&gt;For Milgram, I wanted to argue that participants who continue to administer electric shocks have rationally (and correctly!) deduced that nothing bad is happening. This is consistent with people’s behaviors, but it’s not consistent with their stated beliefs (almost all participants report believing that it was real) or their observed emotional state (many of them were visibly sweating/trembling/having nervous fits).&lt;/p&gt;

&lt;h2 id=&quot;themes&quot;&gt;Themes&lt;/h2&gt;

&lt;p&gt;What can we learn from how psychologists misinterpret these experiments?&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;People consider more types of evidence than psychologists think they do.&lt;/li&gt;
  &lt;li&gt;People intuitively use non-causal decision theories.&lt;/li&gt;
&lt;/ul&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Or at least psychology textbooks are; it’s possible that many actual psychologists are smarter about this. &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:2&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Kruger, J., &amp;amp; Dunning, D. (1999). Unskilled and unaware of it: how difficulties in recognizing one’s own incompetence lead to inflated self-assessments. &lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:3&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Gignac, G. E., &amp;amp; Zajenkowski, M. (2020). &lt;a href=&quot;https://gwern.net/doc/iq/2020-gignac.pdf&quot;&gt;The Dunning-Kruger effect is (mostly) a statistical artefact: Valid approaches to testing the hypothesis with individual differences data.&lt;/a&gt; &lt;a href=&quot;#fnref:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>My Submission for Worst Argument In The World</title>
				<pubDate>Sat, 12 Oct 2024 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2024/10/12/worst_argument_in_the_world/</link>
				<guid isPermaLink="true">http://mdickens.me/2024/10/12/worst_argument_in_the_world/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;Scott Alexander once &lt;a href=&quot;https://www.lesswrong.com/posts/yCWPkLi8wJvewPbEp/the-noncentral-fallacy-the-worst-argument-in-the-world&quot;&gt;wrote&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;David Stove once &lt;a href=&quot;https://web.maths.unsw.edu.au/~jim/worst.html&quot;&gt;ran a contest&lt;/a&gt; to find the Worst Argument In The World, but he awarded the prize to his own entry, and one that shored up his politics to boot. It hardly seems like an objective process.&lt;/p&gt;

  &lt;p&gt;If he can unilaterally declare a Worst Argument, then so can I.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If those guys can unilaterally declare a Worst Argument, then so can I. I declare the Worst Argument In The World to be this:&lt;/p&gt;

&lt;p&gt;“A long time ago, not-A, and also, not-B. Now, A and B. Therefore, A caused B.”&lt;/p&gt;

&lt;p&gt;Example: In 1820, pirates were everywhere. Now you hardly ever see pirates, and global temperatures are rising. Therefore, the lack of pirates caused global warming.&lt;/p&gt;

&lt;p&gt;(This particular argument was originally made as a &lt;a href=&quot;https://www.spaghettimonster.org/pages/about/open-letter/&quot;&gt;joke&lt;/a&gt;, but I will give some real examples later.)&lt;/p&gt;

&lt;p&gt;Naming fallacies is hard. Maybe we could call this the “two distant points in time fallacy”. For now I’ll just call it the Worst Argument.&lt;/p&gt;

&lt;!-- more --&gt;

&lt;p&gt;The Worst Argument is a special case of the &lt;a href=&quot;https://en.wikipedia.org/wiki/Post_hoc_ergo_propter_hoc&quot;&gt;post hoc ergo propter hoc&lt;/a&gt; fallacy: “A happened before B, therefore A caused B.”&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt; I find this special case to be particularly bad. &lt;em&gt;Post hoc ergo propter hoc&lt;/em&gt; can make a bit of sense sometimes: maybe B happens immediately after A, maybe A and B repeatedly appear together. That doesn’t definitively establish causality, but if you have data showing that B always comes right after A, I want to hear about it.&lt;/p&gt;

&lt;p&gt;The thing about the Worst Argument is that it provides almost no evidence of anything. You can’t just look at two distant points in time, check one independent variable, and assume that the independent variable explains the dependent variable. So many other things could have happened! It’s an incredible leap to say that the rise in global temperatures &lt;em&gt;must&lt;/em&gt; be caused by the decline in pirates when about one zillion other things changed between 1820 and today.&lt;/p&gt;

&lt;p&gt;The Worst Argument is the worst version of &lt;em&gt;post hoc ergo propter hoc&lt;/em&gt; because:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;It only looks at two data points.&lt;/li&gt;
  &lt;li&gt;The two data points are far apart in time.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If you’re going to base your reasoning on &lt;em&gt;post hoc ergo propter hoc&lt;/em&gt;, at least give me a bunch of data points! Or at least make them be close together in time!&lt;/p&gt;

&lt;p&gt;The Worst Argument is so obviously bad that surely no one would ever think to invoke it, and if invoked, surely no one would take it seriously, right? And yet people make arguments of this form all the time. And I’ve been persuaded by arguments like this! The fact that this fallacy keeps fooling people (including me) makes it my choice for Worst Argument In The World.&lt;/p&gt;

&lt;p&gt;Let me give some real-world examples. These statements aren’t all wrong, but the arguments are all bad. (It’s possible to make a fallacious argument in support of a true conclusion.&lt;sup id=&quot;fnref:7&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:7&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt;)&lt;/p&gt;

&lt;h2 id=&quot;school-uniforms-prevent-crime&quot;&gt;School uniforms prevent crime&lt;/h2&gt;

&lt;p&gt;When I was in 9th grade, I spent a lot of time on debate.org, a (now-defunct) website that hosted online written debates. I &lt;a href=&quot;https://web.archive.org/web/20100410100917/http://www.debate.org/debates/School-uniforms-ought-to-be-worn-in-primary-and-secondary-schools./1/&quot;&gt;participated in a debate&lt;/a&gt; on the proposition “School uniforms ought to be worn in primary and secondary schools.” I took the negative. My opponent was one of the most renowned debaters on the site, with a record of 75 wins to 7 losses. I was a little nervous to go up against him and I wanted to do my best.&lt;/p&gt;

&lt;p&gt;The debate lasted for three rounds. After the first two rounds, I felt like I was losing, and I spent hours poring over my opponent’s arguments to figure out how to write my final rebuttal. At some point, I had an epiphany: I realized that his central argument was flimsy, but his rhetoric was so good that I hadn’t noticed.&lt;/p&gt;

&lt;p&gt;His argument was essentially this:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;In 1993, the Long Beach school system did not require uniforms, and it had high rates of school crime.&lt;/li&gt;
  &lt;li&gt;Six years later, it required school uniforms, and it had dramatically lower rates of crime.&lt;/li&gt;
  &lt;li&gt;Therefore, school uniforms must have caused the reduction in crime.&lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Nothing else pertinent could have possibly happened in those 6 years, right? They couldn’t have instituted new policies for handling troublesome students, or changed their rules for reporting crimes, or anything like that. It could &lt;em&gt;only&lt;/em&gt; have been the school uniforms.&lt;/p&gt;

&lt;p&gt;In my final rebuttal, I focused on emphasizing the badness of this argument. It was enough to persuade the readers, and I am happy to say that my opponent now has 8 losses.&lt;/p&gt;

&lt;h2 id=&quot;franklin-d-roosevelt-solved-the-great-depression&quot;&gt;Franklin D. Roosevelt solved the Great Depression&lt;/h2&gt;

&lt;p&gt;An argument that I believed for a long time:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;The Great Depression started in 1929.&lt;/li&gt;
  &lt;li&gt;FDR was elected in 1932.&lt;/li&gt;
  &lt;li&gt;The Great Depression ended in 1939, while FDR was in office.&lt;/li&gt;
  &lt;li&gt;Therefore, FDR solved the Great Depression.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Plenty of other things happened between 1932 and 1939. An economy can shift quite a lot in 7 years, for many reasons other than who’s president. A typical economic cycle only lasts something like 8 years, and the typical recession only lasts one or two years. After that long you’d expect the economy to look very different, no matter who the president is.&lt;/p&gt;

&lt;p&gt;You might say, hey, aren’t there better arguments about how specifically FDR’s policies could have improved the economy? Yeah, there are, but that’s not why I used to believe FDR had great economic policies. I believed it because I fell for the Worst Argument In The World.&lt;/p&gt;

&lt;p&gt;(Folks on the other side of the political spectrum like to say Reagan solved the economic problems of the 1970s and early 80s, and they make the exact same argument. I never believed that one for some reason…)&lt;/p&gt;

&lt;h2 id=&quot;labor-unions-and-strikes-were-largely-responsible-for-improved-working-conditions-in-the-19th-and-early-20th-centuries&quot;&gt;Labor unions and strikes were largely responsible for improved working conditions in the 19th and early 20th centuries&lt;/h2&gt;

&lt;p&gt;In high school, I read Howard Zinn’s &lt;em&gt;A People’s History of the United States&lt;/em&gt; and it convinced me of the importance of collective action. The book’s argument went like this:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Laborers had to work in really bad conditions.&lt;/li&gt;
  &lt;li&gt;They organized unions and held strikes.&lt;/li&gt;
  &lt;li&gt;Decades later, conditions were still bad but not quite as bad.&lt;/li&gt;
  &lt;li&gt;Therefore, strikes and labor unions were the cause of the improvement in conditions.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The book repeatedly makes this argument through a series of historical anecdotes.&lt;sup id=&quot;fnref:4&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:4&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt; As I recall, it never says the conclusion explicitly, but it’s strongly implied.&lt;/p&gt;

&lt;p&gt;At the time, I found the book’s anecdotes convincing. It wasn’t until years later that I realized how weak its case was.&lt;/p&gt;

&lt;p&gt;In fact, the book’s argument is even weaker than the Worst Argument In The World. It doesn’t say working conditions were bad, then strikes happened, then many years later conditions were good. It says working conditions were bad, then strikes happened, then conditions were still bad. Which is positive evidence that the strikes &lt;em&gt;didn’t&lt;/em&gt; help. Kind of amazing that I read this and came away with the opposite conclusion.&lt;/p&gt;

&lt;h2 id=&quot;seed-oils-cause-obesity&quot;&gt;Seed oils cause obesity&lt;/h2&gt;

&lt;p&gt;There are some arguments about seed oils that don’t rely on this fallacy, but the most popular argument I see goes like this:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;A century ago, obesity was rare, and also people didn’t eat a lot of seed oils.&lt;/li&gt;
  &lt;li&gt;Today, many more people are obese, and also people eat a lot of seed oils.&lt;/li&gt;
  &lt;li&gt;Therefore, obesity must be caused by seed oils.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I’m almost baffled that anyone finds this convincing, except that I myself was convinced by the Worst Argument In The World in at least two instances as documented above.&lt;/p&gt;

&lt;p&gt;Some people get more sophisticated and they draw a line of obesity going up and a line of something-kinda-like-seed-oil consumption (like vegetable oil consumption) going up and they say, look, the lines both go up! That’s not quite as bad as only looking at two data points, but it’s still pretty bad. (You can also draw a line of pirate populations going down and a line of global temperature going up.) It would be better to go a little further and show that the two lines track each other well. That wouldn’t establish causation but at least it establishes correlation.&lt;sup id=&quot;fnref:5&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:5&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;5&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;h2 id=&quot;martin-luther-king-greatly-improved-civil-rights-for-african-americans&quot;&gt;Martin Luther King greatly improved civil rights for African-Americans&lt;/h2&gt;

&lt;p&gt;I’m sure there are good arguments for this statement out there somewhere, but I always hear people (implicitly) say things like:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;In the 1950s, segregation was legal and African-Americans didn’t have equal rights.&lt;/li&gt;
  &lt;li&gt;Then, Martin Luther King spearheaded civil rights protests. Also, a bunch of other stuff happened.&lt;/li&gt;
  &lt;li&gt;By the 1970s, America was a much better place for black people.&lt;/li&gt;
  &lt;li&gt;Therefore, the improvements in civil rights must have been largely caused by MLK.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2 id=&quot;what-makes-it-the-worst-argument-in-the-world&quot;&gt;What makes it the Worst Argument In The World?&lt;/h2&gt;

&lt;p&gt;Scott Alexander’s &lt;a href=&quot;https://www.lesswrong.com/posts/yCWPkLi8wJvewPbEp/the-noncentral-fallacy-the-worst-argument-in-the-world&quot;&gt;Worst Argument In The World&lt;/a&gt; is a dirty debate tactic—you say something technically true that subtly invokes a hard-to-spot fallacy. My submission for Worst Argument is almost the opposite—you say something fallacious, and it’s pretty obviously fallacious if you think about it, but we fall for it anyway. The Worst Argument In The World reminds me of the &lt;a href=&quot;https://www.youtube.com/watch?v=KB_lTKZm1Ts&quot;&gt;basketball awareness test&lt;/a&gt;: &lt;span class=&quot;spoiler&quot;&gt;Like the moonwalking bear, it somehow sneaks into my brain without getting spotted.&lt;/span&gt;&lt;/p&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Which itself is a special case of the &lt;a href=&quot;https://en.wikipedia.org/wiki/Correlation_does_not_imply_causation&quot;&gt;correlation implies causation&lt;/a&gt; fallacy. &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:7&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;According to &lt;a href=&quot;https://en.wikipedia.org/wiki/Sturgeon%27s_law&quot;&gt;Sturgeon’s law&lt;/a&gt;, 90% of all arguments are crap, including arguments for true claims. &lt;a href=&quot;#fnref:7&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:2&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I didn’t notice this at the time but now that I’m re-reading the debate, I’m reasonably sure the crime numbers are just fraudulent. They claim a 93% reduction in sex offenses. No intervention ever reduces crime by 93%.&lt;/p&gt;

      &lt;p&gt;I’m not sure claiming fraud would have been a good debate tactic, though, and the argument is still terrible even if the numbers are real. &lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:4&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I’m largely going off memory; I haven’t read the book since high school. Based on skimming an online summary, it looks like the book made forms of this argument in at least chapters 10, 11, 13, 15, and 19. &lt;a href=&quot;#fnref:4&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:5&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I looked on a few seed-oil-hypothesis advocacy sites and I couldn’t find any proper statistics. This is not related to my main point but it was bugging me so I decided to do the statistics myself.&lt;/p&gt;

      &lt;p&gt;To my knowledge, we don’t have proper historical data on seed oil consumption. The best I could do was compare &lt;a href=&quot;https://www.ers.usda.gov/data-products/food-availability-per-capita-data-system/&quot;&gt;salad + cooking oil consumption&lt;/a&gt;&lt;sup id=&quot;fnref:6&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:6&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;6&lt;/a&gt;&lt;/sup&gt; and &lt;a href=&quot;https://ourworldindata.org/grapher/share-of-adults-defined-as-obese?tab=chart&amp;amp;country=~USA&quot;&gt;obesity&lt;/a&gt;. Both go up over time, but they don’t track particularly well—for example, oil consumption spiked up around 1999, whereas obesity increased smoothly.&lt;/p&gt;

      &lt;p&gt;Salad + cooking oil consumption correlates with obesity at r = 0.93, but calendar year correlates with obesity at r = 0.994. If calendar year predicts obesity better than your independent variable, then your independent variable probably isn’t very good. (See &lt;a href=&quot;https://github.com/michaeldickens/public-scripts/blob/master/cooking-oil-obesity.xlsx&quot;&gt;here&lt;/a&gt; for my calculations.)&lt;/p&gt;
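
      &lt;p&gt;As a rough sketch of this comparison (using made-up illustrative numbers, not the actual USDA/OWID series from the spreadsheet), here is Pearson’s r computed from scratch, showing how a smooth trend can correlate with calendar year better than with a jumpier independent variable:&lt;/p&gt;

```python
# Pearson correlation from scratch. The series below are made-up
# illustrative numbers (a jumpy oil-like trend and a smooth
# obesity-like trend), NOT the USDA/OWID data from the spreadsheet.
from math import sqrt

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

years   = [1990, 1995, 2000, 2005, 2010, 2015]
oil     = [25, 27, 35, 34, 36, 40]   # hypothetical: trends up, but jumpily
obesity = [12, 16, 20, 24, 28, 32]   # hypothetical: rises smoothly

print(pearson_r(oil, obesity))    # strong, but weaker than...
print(pearson_r(years, obesity))  # ...the calendar-year correlation
```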

      &lt;p&gt;(Exercise for the reader: What is the correlation between obesity and pirate sightings?) &lt;a href=&quot;#fnref:5&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:6&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Lee, J. H., Duster, M., Roberts, T., &amp;amp; Devinsky, O. (2022). &lt;a href=&quot;https://doi.org/10.3389%2Ffnut.2021.748847&quot;&gt;United States Dietary Trends Since 1800: Lack of Association Between Saturated Fatty Acid Consumption and Non-communicable Diseases.&lt;/a&gt; doi: &lt;a href=&quot;https://doi.org/10.3389/fnut.2021.748847&quot;&gt;10.3389/fnut.2021.748847&lt;/a&gt; &lt;a href=&quot;#fnref:6&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>Outlive: A Critical Review</title>
				<pubDate>Thu, 26 Sep 2024 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2024/09/26/outlive_a_critical_review/</link>
				<guid isPermaLink="true">http://mdickens.me/2024/09/26/outlive_a_critical_review/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;&lt;em&gt;Last updated 2025-07-04.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://peterattiamd.com/outlive/&quot;&gt;Outlive: The Science &amp;amp; Art of Longevity&lt;/a&gt; by Peter Attia (with Bill Gifford&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;) gives Attia’s prescription on how to live longer and stay healthy into old age. In this post, I critically review some of the book’s scientific claims that stood out to me.&lt;/p&gt;

&lt;p&gt;This is not a comprehensive review. I didn’t review assertions that I was pretty sure were true (ex: &lt;a href=&quot;https://en.wikipedia.org/wiki/VO2_max&quot;&gt;VO2 max&lt;/a&gt; improves longevity), or that were hard for me to evaluate (ex: the mechanics of how LDL cholesterol functions in the body), or that I didn’t care about (ex: sleep deprivation impairs one’s ability to identify facial expressions).&lt;/p&gt;

&lt;p&gt;First, some general notes:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;I have no expertise on any of the subjects in this post. I evaluated claims by doing shallow readings of relevant scientific literature, especially meta-analyses.&lt;/li&gt;
  &lt;li&gt;There is a spectrum between two ways of being wrong, ranging from “pop science book pushes a flashy attention-grabbing thesis with little regard for truth” to “careful truth-seeking author isn’t infallible”. &lt;em&gt;Outlive&lt;/em&gt; makes it 75% of the way to the latter.&lt;/li&gt;
  &lt;li&gt;If I wrote a book that covered this many entirely different scientific fields, I would get a lot more things wrong than &lt;em&gt;Outlive&lt;/em&gt; did. (I probably get a lot of things wrong in this post.)&lt;/li&gt;
  &lt;li&gt;When making my assessments, I give numeric credences and also use terms such as “true” and “likely true”. The numbers give my all-things-considered subjective credences, and the qualitative terms give my interpretation of the strength of the empirical evidence. For example, if the scientific evidence suggests that a claim is 75% likely and I understand the evidence well, then I rate the claim as “likely true”. If I only read the abstract of a single meta-analysis, and the abstract unequivocally supports the claim but I’m only 75% sure that the meta-analysis can be trusted, then I rate it as “true”. Both claims receive a 75% credence.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now let’s have a look at some claims from &lt;em&gt;Outlive&lt;/em&gt;, broken down into four categories: disease, exercise, nutrition, and sleep.&lt;/p&gt;

&lt;!-- more --&gt;

&lt;h2 id=&quot;contents&quot;&gt;Contents&lt;/h2&gt;

&lt;ul id=&quot;markdown-toc&quot;&gt;
  &lt;li&gt;&lt;a href=&quot;#contents&quot; id=&quot;markdown-toc-contents&quot;&gt;Contents&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#disease&quot; id=&quot;markdown-toc-disease&quot;&gt;Disease&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#people-with-metabolically-healthy-obesity-do-not-have-elevated-mortality-risk&quot; id=&quot;markdown-toc-people-with-metabolically-healthy-obesity-do-not-have-elevated-mortality-risk&quot;&gt;People with metabolically healthy obesity do not have elevated mortality risk&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#amyloid-beta-is-implicated-in-alzheimers-disease&quot; id=&quot;markdown-toc-amyloid-beta-is-implicated-in-alzheimers-disease&quot;&gt;Amyloid beta is implicated in Alzheimer’s disease&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#hdl-cholesterol-on-its-own-doesnt-prevent-heart-disease&quot; id=&quot;markdown-toc-hdl-cholesterol-on-its-own-doesnt-prevent-heart-disease&quot;&gt;HDL cholesterol on its own doesn’t prevent heart disease&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#exercise&quot; id=&quot;markdown-toc-exercise&quot;&gt;Exercise&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#vo2max-is-the-best-predictor-of-longevity&quot; id=&quot;markdown-toc-vo2max-is-the-best-predictor-of-longevity&quot;&gt;VO2max is the best predictor of longevity&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#you-should-train-vo2max-by-doing-hiit-at-the-maximum-sustainable-pace&quot; id=&quot;markdown-toc-you-should-train-vo2max-by-doing-hiit-at-the-maximum-sustainable-pace&quot;&gt;You should train VO2max by doing HIIT at the maximum sustainable pace&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#you-should-do-3-hoursweek-of-zone-2-training-and-one-or-two-sessionsweek-of-hiit&quot; id=&quot;markdown-toc-you-should-do-3-hoursweek-of-zone-2-training-and-one-or-two-sessionsweek-of-hiit&quot;&gt;You should do 3+ hours/week of zone 2 training and one or two sessions/week of HIIT&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#stability-is-as-important-as-cardiovascular-fitness-and-strength&quot; id=&quot;markdown-toc-stability-is-as-important-as-cardiovascular-fitness-and-strength&quot;&gt;Stability is as important as cardiovascular fitness and strength&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#nutrition&quot; id=&quot;markdown-toc-nutrition&quot;&gt;Nutrition&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#rhesus-monkey-studies-suggest-that-calorie-restriction-improves-longevity-but-only-if-you-eat-a-fairly-unhealthy-diet&quot; id=&quot;markdown-toc-rhesus-monkey-studies-suggest-that-calorie-restriction-improves-longevity-but-only-if-you-eat-a-fairly-unhealthy-diet&quot;&gt;Rhesus monkey studies suggest that calorie restriction improves longevity but only if you eat a fairly unhealthy diet&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#the-data-are-unclear-on-whether-reducing-saturated-fat-intake-is-beneficial&quot; id=&quot;markdown-toc-the-data-are-unclear-on-whether-reducing-saturated-fat-intake-is-beneficial&quot;&gt;The data are unclear on whether reducing saturated fat intake is beneficial&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#people-should-take-omega-3-supplements&quot; id=&quot;markdown-toc-people-should-take-omega-3-supplements&quot;&gt;People should take omega-3 supplements&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#sleep&quot; id=&quot;markdown-toc-sleep&quot;&gt;Sleep&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#every-animal-sleeps&quot; id=&quot;markdown-toc-every-animal-sleeps&quot;&gt;Every animal sleeps&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#we-need-to-sleep-75-to-85-hours-a-night&quot; id=&quot;markdown-toc-we-need-to-sleep-75-to-85-hours-a-night&quot;&gt;We need to sleep 7.5 to 8.5 hours a night&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#basketball-players-who-were-told-to-sleep-for-10-hours-a-night-had-better-shooting-accuracy&quot; id=&quot;markdown-toc-basketball-players-who-were-told-to-sleep-for-10-hours-a-night-had-better-shooting-accuracy&quot;&gt;Basketball players who were told to sleep for 10 hours a night had better shooting accuracy&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#lack-of-sleep-increases-obesity-and-diabetes-risk&quot; id=&quot;markdown-toc-lack-of-sleep-increases-obesity-and-diabetes-risk&quot;&gt;Lack of sleep increases obesity and diabetes risk&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#a-study-using-mendelian-randomization-found-that-sleeping-6-hours-a-night-increased-risk-of-a-heart-attack&quot; id=&quot;markdown-toc-a-study-using-mendelian-randomization-found-that-sleeping-6-hours-a-night-increased-risk-of-a-heart-attack&quot;&gt;A study using Mendelian randomization found that sleeping &amp;lt;6 hours a night increased risk of a heart attack&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#lack-of-sleep-causes-alzheimers-disease&quot; id=&quot;markdown-toc-lack-of-sleep-causes-alzheimers-disease&quot;&gt;Lack of sleep causes Alzheimer’s disease&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#bonus&quot; id=&quot;markdown-toc-bonus&quot;&gt;Bonus&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#dunning-kruger-effect&quot; id=&quot;markdown-toc-dunning-kruger-effect&quot;&gt;Dunning-Kruger effect&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#changelog&quot; id=&quot;markdown-toc-changelog&quot;&gt;Changelog&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#notes&quot; id=&quot;markdown-toc-notes&quot;&gt;Notes&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h1 id=&quot;disease&quot;&gt;Disease&lt;/h1&gt;

&lt;h2 id=&quot;people-with-metabolically-healthy-obesity-do-not-have-elevated-mortality-risk&quot;&gt;People with metabolically healthy obesity do not have elevated mortality risk&lt;/h2&gt;

&lt;p&gt;A person is defined as having &lt;a href=&quot;https://en.wikipedia.org/wiki/Metabolic_syndrome&quot;&gt;metabolic syndrome&lt;/a&gt; if they show at least three out of five symptoms:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;abdominal obesity (i.e. large waist circumference)&lt;/li&gt;
  &lt;li&gt;high blood pressure&lt;/li&gt;
  &lt;li&gt;high blood sugar&lt;/li&gt;
  &lt;li&gt;high serum triglycerides&lt;/li&gt;
  &lt;li&gt;low HDL cholesterol&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;People with obesity but no metabolic syndrome are said to have &lt;strong&gt;metabolically healthy obesity&lt;/strong&gt; (MHO).&lt;/p&gt;

&lt;p&gt;Here’s what &lt;em&gt;Outlive&lt;/em&gt; has to say about MHO:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;A large meta-analysis of studies with mean follow-up time of 11.5 years showed that people [with metabolic syndrome] have more than triple the risk of all-cause mortality and/or cardiovascular events than metabolically healthy normal-weight individuals. Meanwhile, the metabolically healthy but obese subjects in these studies were not at significantly increased risk. (page 95)&lt;sup id=&quot;fnref:12&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:12&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;My assessment:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;The statement “MHO subjects in these studies were not at significantly increased mortality risk” is technically correct: &lt;strong&gt;true&lt;/strong&gt; (credence: 90%&lt;sup id=&quot;fnref:59&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:59&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt;).&lt;/li&gt;
  &lt;li&gt;Obesity has no negative health effects for metabolically healthy people: &lt;strong&gt;false&lt;/strong&gt; (credence: 5%).&lt;/li&gt;
  &lt;li&gt;Metabolically healthy obesity does not increase mortality risk: &lt;strong&gt;highly unlikely&lt;/strong&gt; (credence: 10%).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Outlive&lt;/em&gt; does not exactly say that MHO carries no elevated health risk, but some readers may come away with that impression, so I want to clarify that obesity is still bad for you even if you’re metabolically healthy.&lt;/p&gt;

&lt;p&gt;The book cites &lt;a href=&quot;https://doi.org/10.1016/j.cmet.2017.07.008&quot;&gt;Stefan et al. (2017)&lt;/a&gt;&lt;sup id=&quot;fnref:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt; which in turn cites &lt;a href=&quot;/materials/kramer2013.pdf&quot;&gt;Kramer et al. (2013)&lt;/a&gt;&lt;sup id=&quot;fnref:39&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:39&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;5&lt;/a&gt;&lt;/sup&gt;:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;In a pooled analysis of 8 studies, metabolically healthy obese persons had a similar risk for all-cause mortality or CV [cardiovascular] events compared with the metabolically healthy normal-weight individuals (RR, 1.19; [95%] CI, 0.98 to 1.38). […] However, after we restricted analysis only to studies with at least 10 years of follow-up, the metabolically healthy obese group indeed had increased mortality and CV risk compared with the metabolically healthy normal-weight group (RR, 1.24; CI, 1.02 to 1.55; I^2 = 33.6%)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Stefan et al. (2017) also found that MHO subjects had higher rates of chronic disease than metabolically healthy normal-weight subjects; see &lt;a href=&quot;/assets/images/Stefan-2017-Figure-2.jpg&quot;&gt;Figure 2&lt;/a&gt;:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/Stefan-2017-Figure-2.jpg&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;“MHO does not significantly increase risk” is one way you could describe this evidence. But it’s not the description I’d use:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;A &lt;a href=&quot;https://en.wikipedia.org/wiki/Relative_risk&quot;&gt;relative risk&lt;/a&gt; (RR) of 1.19 sounds pretty bad. This finding is (just barely) not statistically significant, but it still has a likelihood ratio of about 5:1 compared to RR = 1&lt;sup id=&quot;fnref:40&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:40&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;6&lt;/a&gt;&lt;/sup&gt;—that is, we are 5x more likely to see this result if MHO elevates mortality risk by 1.19x than if it doesn’t elevate risk at all.&lt;/li&gt;
  &lt;li&gt;MHO subjects had worse health across the board—higher rates of fatty liver disease, higher insulin resistance, lower cardiorespiratory fitness, more arterial plaque buildup—while also having non-significantly higher mortality. Which is a more reasonable interpretation: All those health problems somehow don’t translate into increased mortality? Or the health problems &lt;em&gt;do&lt;/em&gt; increase mortality, and the observed RR = 1.19 is real even though it’s not statistically significant? (And the true RR probably isn’t &lt;em&gt;exactly&lt;/em&gt; 1.19, but it’s somewhere in that vicinity.)&lt;/li&gt;
&lt;/ul&gt;
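
&lt;p&gt;To sketch where a likelihood ratio like that can come from (assumptions: a normal approximation on the log scale, with the standard error backed out of the reported 95% CI of 0.98 to 1.38; the post’s footnote may compute it differently):&lt;/p&gt;

```python
# Hedged sketch: back out a likelihood ratio for RR = 1.19 vs. RR = 1
# from the reported 95% CI (0.98 to 1.38), assuming the estimated
# log relative risk is approximately normal. This is a ballpark
# reconstruction, not necessarily the post's exact calculation.
from math import log, exp

rr_hat, ci_lo, ci_hi = 1.19, 0.98, 1.38

# Standard error of log(RR), from the CI width on the log scale.
se = (log(ci_hi) - log(ci_lo)) / (2 * 1.96)

# How many SEs the estimate sits from the null (log RR = 0).
z = log(rr_hat) / se

# Ratio of normal densities: N(obs; mean=log(1.19)) vs. N(obs; mean=0).
# The exponent simplifies to z**2 / 2 because the observed estimate
# sits exactly at the alternative hypothesis's mean.
likelihood_ratio = exp(z ** 2 / 2)

print(round(likelihood_ratio, 1))
```

&lt;p&gt;With these inputs the ratio comes out in the high single digits, the same ballpark as the ~5:1 figure in the text.&lt;/p&gt;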

&lt;p&gt;Many lament how often researchers treat p &amp;lt; 0.05 as “definitely real”, but it bugs me just as much when they treat p &amp;gt; 0.05 as “definitely no effect”.&lt;/p&gt;

&lt;p&gt;Ben Carpenter at Red Pen Reviews &lt;a href=&quot;https://www.redpenreviews.org/reviews/everything-fat-loss/&quot;&gt;writes&lt;/a&gt;:&lt;sup id=&quot;fnref:42&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:42&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;7&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;Roughly &lt;a href=&quot;https://doi.org/10.1097%2FMD.0000000000008838&quot;&gt;50% of people&lt;/a&gt;&lt;sup id=&quot;fnref:41&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:41&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;8&lt;/a&gt;&lt;/sup&gt; with metabolically healthy obesity will develop at least one metabolic abnormality within 3–10 years, which is double the risk of normal-weight individuals. Further, metabolically healthy obesity is still associated with an increased risk of &lt;a href=&quot;https://doi.org/10.1177/2047487315623884&quot;&gt;adverse cardiovascular events&lt;/a&gt;&lt;sup id=&quot;fnref:4&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:4&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;9&lt;/a&gt;&lt;/sup&gt;, &lt;a href=&quot;https://doi.org/10.1016/j.atherosclerosis.2017.03.035&quot;&gt;subclinical atherosclerosis&lt;/a&gt;&lt;sup id=&quot;fnref:5&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:5&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;10&lt;/a&gt;&lt;/sup&gt; (plaque buildup within the arterial walls), &lt;a href=&quot;https://doi.org/10.1038/ajg.2016.178&quot;&gt;nonalcoholic fatty liver disease&lt;/a&gt;&lt;sup id=&quot;fnref:6&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:6&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;11&lt;/a&gt;&lt;/sup&gt;, &lt;a href=&quot;https://doi.org/10.1002/oby.22134&quot;&gt;kidney function decline&lt;/a&gt;&lt;sup id=&quot;fnref:7&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:7&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;12&lt;/a&gt;&lt;/sup&gt;, and &lt;a href=&quot;https://doi.org/10.1111/obr.12157&quot;&gt;type II diabetes&lt;/a&gt;&lt;sup id=&quot;fnref:8&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:8&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;13&lt;/a&gt;&lt;/sup&gt;. 
So, considering this evidence, being metabolically healthy with obesity is probably short-lived and still has health risks.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2 id=&quot;amyloid-beta-is-implicated-in-alzheimers-disease&quot;&gt;Amyloid beta is implicated in Alzheimer’s disease&lt;/h2&gt;

&lt;blockquote&gt;
  &lt;p&gt;[&lt;a href=&quot;https://en.wikipedia.org/wiki/Amyloid_beta&quot;&gt;Amyloid beta&lt;/a&gt;] is clearly bad stuff. (page 181)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;My assessment: &lt;strong&gt;true&lt;/strong&gt; (credence: 90%).&lt;/p&gt;

&lt;p&gt;This claim stood out in my memory because a great deal of research on amyloid beta recently &lt;a href=&quot;https://www.science.org/content/article/potential-fabrication-research-images-threatens-key-theory-alzheimers-disease&quot;&gt;turned out to be fraudulent&lt;/a&gt;. But upon re-reading the relevant section of &lt;em&gt;Outlive&lt;/em&gt;, I found that none of the book’s claims relied on the fraudulent research, and in fact the book cites the fraud investigation itself:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;[Some scientists’] doubts seemed to be validated in July of 2022, when &lt;em&gt;Science&lt;/em&gt; published an article calling into question a widely cited 2006 study that had given new impetus to the amyloid theory, at a time when it had already seemed to be weakening.&lt;sup id=&quot;fnref:60&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:60&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;14&lt;/a&gt;&lt;/sup&gt; The 2006 study had pinpointed a particular subtype of amyloid that it claimed directly caused neurodegeneration. That in turn inspired numerous investigations into the subtype. But according to the &lt;em&gt;Science&lt;/em&gt; article, key images in that study had been falsified. (page 183–184)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The claims made in &lt;em&gt;Outlive&lt;/em&gt; about the apparent negative effects of amyloid beta come from studies that predate the 2006 fraud, and those older claims appear to hold up. The book accurately describes the current state of the field as far as I understand it: amyloid beta is associated with at least some cases of Alzheimer’s, but research into amyloid beta has so far failed to uncover any useful treatments.&lt;/p&gt;

&lt;h2 id=&quot;hdl-cholesterol-on-its-own-doesnt-prevent-heart-disease&quot;&gt;HDL cholesterol on its own doesn’t prevent heart disease&lt;/h2&gt;

&lt;p&gt;I wrote this claim in my notes on my first read-through, but upon re-reading, &lt;em&gt;Outlive&lt;/em&gt; never actually said this. I’m including this claim anyway so that readers know I have poor reading comprehension and can’t be trusted.&lt;sup id=&quot;fnref:107&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:107&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;15&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;Here’s what the book actually said:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;Risk does seem to decline as HDL-C rises to around the 80th percentile. But simply raising HDL cholesterol concentrations by brute force, with specialized drugs, has not been shown to reduce cardiovascular risk at all. (page 123)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;My assessment:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;HDL cholesterol on its own doesn’t prevent heart disease: &lt;strong&gt;false&lt;/strong&gt;, but &lt;em&gt;Outlive&lt;/em&gt; never said that—in fact, it said the opposite.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1 id=&quot;exercise&quot;&gt;Exercise&lt;/h1&gt;

&lt;p&gt;The book (rightly) focuses more on exercise than on nutrition or sleep. &lt;del&gt;From what I can tell, it is the most scientifically accurate section of the book.&lt;/del&gt;&lt;/p&gt;

&lt;p&gt;Update 2025-05-05: I’ve come to learn more about exercise and longevity, and I now believe some of &lt;em&gt;Outlive&lt;/em&gt;’s claims on exercise are incorrect or overstated. The next three sections below explain my updated beliefs. They largely cover the same ground as my recent post,  &lt;a href=&quot;https://mdickens.me/2025/02/03/I_was_probably_wrong_about_HIIT_and_VO2max/&quot;&gt;I was probably wrong about HIIT and VO2max&lt;/a&gt;, but with better evidence and citations (citing meta-analyses instead of tweets).&lt;/p&gt;

&lt;h2 id=&quot;vo2max-is-the-best-predictor-of-longevity&quot;&gt;VO2max is the best predictor of longevity&lt;/h2&gt;

&lt;p&gt;&lt;a href=&quot;https://en.wikipedia.org/wiki/VO2_max&quot;&gt;VO2max&lt;/a&gt; is the maximum amount of oxygen your body is capable of consuming. VO2max is commonly used as a measure of aerobic fitness.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Outlive&lt;/em&gt; never directly says VO2max is the best predictor of longevity, but it sure implies it. The book spends eight pages talking about the association between VO2max and longevity, and about how important it is.&lt;/p&gt;

&lt;p&gt;My assessment:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Increasing VO2max increases longevity: &lt;strong&gt;true&lt;/strong&gt; (credence: 97%).&lt;/li&gt;
  &lt;li&gt;The causal relationship between VO2max and longevity is as strong as &lt;em&gt;Outlive&lt;/em&gt; implies: &lt;strong&gt;false&lt;/strong&gt; (credence: 10%).&lt;/li&gt;
  &lt;li&gt;VO2max is the best proxy for physical fitness: &lt;strong&gt;false&lt;/strong&gt; (credence: 20%) – performance on fitness tests (e.g. maximum pace on an incline treadmill) is easier to measure and probably more accurate.&lt;sup id=&quot;fnref:116&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:116&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;16&lt;/a&gt;&lt;/sup&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I have two objections to how &lt;em&gt;Outlive&lt;/em&gt; characterizes VO2max:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;The book cites observational studies, not RCTs.&lt;/li&gt;
  &lt;li&gt;VO2max is a proxy for physical fitness, not physical fitness itself. And there are better proxies.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Correlation is not causation. I am confident that training to increase VO2max does, in fact, increase longevity. But I believe observational studies overstate the magnitude of the effect.&lt;/p&gt;

&lt;p&gt;I’ve looked at both observational studies and RCTs on the relationship between exercise and health, and my sense is that observational studies overstate the causal effect by about 2x. (It’s on my to-do list to write a post about why I believe this.) I don’t know about VO2max in particular, but I expect that only about half of its observed association with longevity is causal.&lt;/p&gt;

&lt;p&gt;However, even after cutting the effect in half, exercise is still the best general-purpose&lt;sup id=&quot;fnref:111&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:111&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;17&lt;/a&gt;&lt;/sup&gt; intervention for longevity.&lt;/p&gt;

&lt;p&gt;As for my second objection: VO2max is only a proxy for physical fitness. Here I will quote an &lt;a href=&quot;https://thegrowtheq.com/longevity-and-vo2max-does-it-matter/&quot;&gt;article&lt;/a&gt; by elite running coach Steve Magness, because I can’t explain it any better than he did:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;In practical terms, Vo2max is like knowing the size of a car’s engine, which is really important if we want to know about performance. But if we want to know whether that car has a chance to win the Daytona 500, engine size alone won’t tell us. We also need to know about the size of the fuel tank, about its fuel economy, about how long its tires will hold up, and about all the other small components that translate the power of the engine to the speed of the car. It’s the same in humans. Vo2max is one of many components that, taken together, tell us about our holistic aerobic or cardiorespiratory fitness.&lt;/p&gt;

  &lt;p&gt;[…]&lt;/p&gt;

  &lt;p&gt;Vo2max matters. But it’s just one component of many that make up both performance and aerobic fitness. And that’s important because if we return to the original claims that Vo2max is the key indicator of longevity, we’ll find that the majority of the studies cited did NOT even use Vo2max as the main variable. They used performance! In the majority of research, peak speed and incline during the [Vo2max] test was the main correlate to longevity.&lt;/p&gt;

  &lt;p&gt;The large &lt;a href=&quot;https://www.sciencedirect.com/science/article/pii/S0735109722052603#bib36&quot;&gt;study&lt;/a&gt;&lt;sup id=&quot;fnref:112&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:112&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;18&lt;/a&gt;&lt;/sup&gt; on 750,000 veterans that found a 4-fold higher mortality risk for low versus high fitness used peak speed and incline, not Vo2max. Same with the &lt;a href=&quot;https://jamanetwork.com/journals/jamanetworkopen/article-abstract/2707428&quot;&gt;research&lt;/a&gt;&lt;sup id=&quot;fnref:113&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:113&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;19&lt;/a&gt;&lt;/sup&gt; on 120,000 individuals finding a 5x difference in the risk of early death.&lt;/p&gt;

  &lt;p&gt;[…] And as we can see in this &lt;a href=&quot;https://www.sciencedirect.com/science/article/abs/pii/S0033062017300439&quot;&gt;meta-analysis&lt;/a&gt;&lt;sup id=&quot;fnref:114&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:114&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;20&lt;/a&gt;&lt;/sup&gt; looking at mortality and fitness, all but a handful of the included studies used an estimate based on speed or time.&lt;/p&gt;

  &lt;p&gt;You get the point.&lt;/p&gt;

  &lt;p&gt;And this is good news! It means you don’t need to go to a lab and measure your Vo2max. You don’t even need to worry about Vo2max itself (or your watche’s [sic] horrible estimation of it). All you need to do is focus on overall aerobic fitness. Which can easily be measured, compared, and improved in a number of ways that are less expensive and more accessible than Vo2max.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;So, in short: most studies that measure “VO2max” are actually measuring performance on a fitness test. And this is good news, because it means if your performance is improving—if your mile time or your 5K time is getting faster—then you’re making real progress.&lt;/p&gt;

&lt;h2 id=&quot;you-should-train-vo2max-by-doing-hiit-at-the-maximum-sustainable-pace&quot;&gt;You should train VO2max by doing HIIT at the maximum sustainable pace&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;Outlive&lt;/em&gt; recommends doing high-intensity interval training (&lt;a href=&quot;https://en.wikipedia.org/wiki/High-intensity_interval_training&quot;&gt;HIIT&lt;/a&gt;) to improve VO2max:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;The tried-and-true formula for [VO2max training] is to go four minutes at the maximum pace you can sustain for this amount of time—not an all-out sprint, but still a very hard effort. Then ride or jog four minutes easy, which should be enough time for your heart rate to come back down to below about one hundred beats per minute. Repeat this four to six times and cool down. (page 249)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;My assessment:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;The best exercise routine includes HIIT: &lt;strong&gt;likely true&lt;/strong&gt; (credence: 85%).&lt;/li&gt;
  &lt;li&gt;Longer intervals are better than shorter intervals: &lt;strong&gt;likely true&lt;/strong&gt; (credence: 85%).&lt;/li&gt;
  &lt;li&gt;HIIT should be done at the maximum sustainable pace: &lt;strong&gt;likely false&lt;/strong&gt; (credence: 25%).&lt;/li&gt;
  &lt;li&gt;HIIT is the best way to improve VO2max: &lt;strong&gt;likely true&lt;/strong&gt; (credence: 70%).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Four-minute intervals are reasonable: a meta-analysis by &lt;a href=&quot;/website/materials/wen2019.pdf&quot;&gt;Wen et al. (2019)&lt;/a&gt;&lt;sup id=&quot;fnref:120&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:120&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;21&lt;/a&gt;&lt;/sup&gt; found that ≥2-minute intervals produced bigger performance benefits than &amp;lt;2 minutes (although all interval durations improved VO2max).&lt;/p&gt;

&lt;p&gt;But I have two concerns:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Training at the maximum sustainable pace is a recipe for burnout. You should train at a pace that’s difficult, but not maximum effort.&lt;/li&gt;
  &lt;li&gt;There is mixed evidence on whether HIIT is the best way to improve VO2max. Low-intensity training might be just as good.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For the first concern:&lt;/p&gt;

&lt;p&gt;Taken literally, the quoted prescription is impossible to follow. If you do your first interval at the maximum sustainable pace, then your second interval cannot possibly be done at the same pace because you will be fatigued from the first interval.&lt;/p&gt;

&lt;p&gt;Perhaps Attia meant to say that you should go at the maximum pace you can sustain &lt;em&gt;across all intervals&lt;/em&gt;. That’s physically possible, but it still seems like a bad idea.&lt;/p&gt;

&lt;p&gt;I wanted to cite RCT evidence comparing HIIT at different intensities while controlling for duration (e.g., 4-minute intervals at maximum pace vs. 4-minute intervals at 90% of maximum pace). But I couldn’t find any. Instead, the evidence on this question comes from studies of how athletes train.&lt;/p&gt;

&lt;p&gt;Elite athletes typically do HIIT at roughly 90% of VO2max (&lt;a href=&quot;/materials/seiler2010&quot;&gt;Seiler (2010)&lt;/a&gt;&lt;sup id=&quot;fnref:121&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:121&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;22&lt;/a&gt;&lt;/sup&gt;). This is &lt;em&gt;considerably slower&lt;/em&gt; than the maximum sustainable pace (&lt;a href=&quot;/materials/billat2001.pdf&quot;&gt;Billat (2001)&lt;/a&gt;&lt;sup id=&quot;fnref:126&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:126&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;23&lt;/a&gt;&lt;/sup&gt;). For example, &lt;a href=&quot;https://doi.org/10.3109/13813459508996126&quot;&gt;Billat et al. (1995)&lt;/a&gt;&lt;sup id=&quot;fnref:122&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:122&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;24&lt;/a&gt;&lt;/sup&gt; found that a sample of elite long-distance runners could maintain 90% of VO2max for an average of 16.55 minutes.&lt;sup id=&quot;fnref:124&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:124&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;25&lt;/a&gt;&lt;/sup&gt; If you can maintain a certain pace for 16 minutes straight, then you can certainly run at that pace for four 4-minute intervals with rests in between.&lt;/p&gt;
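&lt;p&gt;As a quick sanity check on that arithmetic (a hypothetical sketch; the 16.55-minute figure is the study average, and the interval numbers come from Attia's prescription):&lt;/p&gt;

```python
# Compare total hard-running time in the prescribed workout against the
# time elite runners can sustain 90% of VO2max *continuously*.
interval_minutes = 4
num_intervals = 4                        # Attia recommends 4-6 repeats
total_work = interval_minutes * num_intervals    # 16 minutes of hard running

sustainable_continuous = 16.55           # Billat et al. (1995), elite average

# With full rest between intervals, intermittent work totaling 16 minutes
# at a pace sustainable for ~16.5 continuous minutes is clearly feasible.
print(total_work, total_work <= sustainable_continuous)   # 16 True
```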

&lt;p&gt;For the second concern: is HIIT really better than low-intensity training (LIT)&lt;sup id=&quot;fnref:130&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:130&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;26&lt;/a&gt;&lt;/sup&gt; for improving VO2max? Probably yes, but it’s unclear.&lt;/p&gt;

&lt;p&gt;The RCT evidence on this is mixed. A meta-analysis by &lt;a href=&quot;/website/materials/wen2019.pdf&quot;&gt;Wen et al. (2019)&lt;/a&gt;&lt;sup id=&quot;fnref:120:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:120&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;21&lt;/a&gt;&lt;/sup&gt; found that HIIT worked better than LIT for increasing VO2max. However, a review of meta-analyses by &lt;a href=&quot;https://doi.org/10.1155/2022/9310710&quot;&gt;Crowley et al. (2022)&lt;/a&gt;&lt;sup id=&quot;fnref:127&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:127&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;27&lt;/a&gt;&lt;/sup&gt; found inconsistent evidence.&lt;sup id=&quot;fnref:128&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:128&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;28&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;I didn’t look at most of the meta-analyses reviewed by Crowley et al. I did look at &lt;a href=&quot;https://doi.org/10.1007/s40279-013-0115-0&quot;&gt;Gist et al. (2013)&lt;/a&gt;, which at a glance appears to be the most rigorous “contrarian” meta-analysis. It found that sprint interval training did not increase VO2max by more than endurance training. However, this is roughly consistent with &lt;a href=&quot;/website/materials/wen2019.pdf&quot;&gt;Wen et al. (2019)&lt;/a&gt;&lt;sup id=&quot;fnref:120:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:120&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;21&lt;/a&gt;&lt;/sup&gt;, which found that longer intervals (2+ minutes) worked better than sprint intervals.&lt;/p&gt;

&lt;p&gt;RCTs may overstate the benefits of HIIT relative to LIT. A meta-analysis by &lt;a href=&quot;https://doi.org/10.1007/s40279-024-02120-2&quot;&gt;Mølmen et al. (2024)&lt;/a&gt;&lt;sup id=&quot;fnref:118&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:118&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;29&lt;/a&gt;&lt;/sup&gt; found that HIIT rapidly improved participants’ fitness, whereas the benefits of LIT took longer to show up. Therefore, a 5-week study will show a clear advantage to HIIT even if LIT might be better in the long run:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/Molmen-2024.jpeg&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;(ET = endurance training; HIT = high-intensity training; SIT = sprint-intensity training)&lt;/p&gt;

&lt;p&gt;That being said, there is &lt;em&gt;some&lt;/em&gt; evidence that HIIT provides benefits on top of LIT, so if your goal is to optimize longevity, then I believe it makes sense to do both.&lt;/p&gt;

&lt;h2 id=&quot;you-should-do-3-hoursweek-of-zone-2-training-and-one-or-two-sessionsweek-of-hiit&quot;&gt;You should do 3+ hours/week of zone 2 training and one or two sessions/week of HIIT&lt;/h2&gt;

&lt;blockquote&gt;
  &lt;p&gt;[I]t seems that about three hours per week of zone 2, or four 45-minute sessions, is the minimum required for most people to derive a benefit and make improvements, once you get over the initial hump of trying it for the first time. (page 243)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;blockquote&gt;
  &lt;p&gt;Even if we are not out to set world records, the way we train VO2max is pretty similar to the way elite athletes do it: by supplementing our zone 2 work with one or two VO2max workouts per week. (page 249)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;My assessment:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;This is a good exercise routine: &lt;strong&gt;true&lt;/strong&gt; (credence: 95%).&lt;/li&gt;
  &lt;li&gt;This routine is uniquely better than any other: &lt;strong&gt;likely false&lt;/strong&gt; (credence: 25%).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;RCT evidence doesn’t tell us much about the optimal exercise routine. As with HIIT vs. LIT, the best evidence comes from looking at how top athletes train.&lt;/p&gt;

&lt;p&gt;Elite athletes typically do 80% of their training at low/moderate intensity (a.k.a. &lt;a href=&quot;https://trainright.com/zone-2-training-to-improve-aerobic-endurance-and-fat-burning/&quot;&gt;zone 2&lt;/a&gt;), and 20% at high intensity (&lt;a href=&quot;/materials/seiler2010&quot;&gt;Seiler (2010)&lt;/a&gt;&lt;sup id=&quot;fnref:121:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:121&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;22&lt;/a&gt;&lt;/sup&gt;; &lt;a href=&quot;https://doi.org/10.3389/fphys.2015.00295&quot;&gt;Stöggl &amp;amp; Sperlich (2015)&lt;/a&gt;&lt;sup id=&quot;fnref:123&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:123&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;30&lt;/a&gt;&lt;/sup&gt;). For elite athletes, doing more than 20% of cardio sessions at high intensity can induce overtraining (&lt;a href=&quot;/materials/seiler2010&quot;&gt;Seiler (2010)&lt;/a&gt;&lt;sup id=&quot;fnref:121:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:121&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;22&lt;/a&gt;&lt;/sup&gt;).&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Outlive&lt;/em&gt; recommends training at just two intensities: low and high. But many elite athletes train at three or more different durations/intensities (&lt;a href=&quot;https://doi.org/10.3389/fphys.2015.00295&quot;&gt;Stöggl &amp;amp; Sperlich (2015)&lt;/a&gt;&lt;sup id=&quot;fnref:123:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:123&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;30&lt;/a&gt;&lt;/sup&gt;). Different training modalities cause your body to adapt in different ways, so it makes sense to mix things up. You might do something like 80% zone 2 training, 15% long-duration (~4-minute) interval training, and 5% sprint (&amp;lt; 1 minute) interval training. But just two intensities is probably sufficient to get the vast majority of longevity benefits.&lt;/p&gt;

&lt;p&gt;I am not confident that ordinary folks should follow the same 80/20 rule as elite athletes. Professionals train 20–30 hours per week, including 4+ hours of high-intensity training. If I do 5 hours a week of cardio instead of 20, do I really need to restrict my HIIT to only one hour? Maybe I could handle 2 or 3 hours (or even 5 hours) without overtraining. But in the absence of direct evidence, I’ll follow the 80/20 rule.&lt;/p&gt;
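&lt;p&gt;If you do want to apply the elite athletes' 80/20 split to a recreational schedule, the arithmetic is trivial (a hypothetical helper; the 5-hour volume is just an example):&lt;/p&gt;

```python
# Apply the 80/20 intensity split (Seiler 2010) to a given weekly
# cardio volume. Returns hours of low-intensity and high-intensity work.
def split_80_20(weekly_cardio_hours):
    return 0.8 * weekly_cardio_hours, 0.2 * weekly_cardio_hours

low, high = split_80_20(5)   # 5 hours/week of cardio
print(low, high)             # 4.0 1.0 -> 4 easy hours, 1 hour of HIIT
```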

&lt;p&gt;For the record, my exercise routine is:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Low-intensity cardio 3x/week for 45 to 90 minutes (depending on how I’m feeling);&lt;/li&gt;
  &lt;li&gt;HIIT once every 1–2 weeks (usually four 4-minute intervals, and occasionally a different variation);&lt;/li&gt;
  &lt;li&gt;Resistance training 3–4x/week.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I think this is less than the optimal amount of HIIT, but I don’t like doing HIIT.&lt;/p&gt;

&lt;h2 id=&quot;stability-is-as-important-as-cardiovascular-fitness-and-strength&quot;&gt;Stability is as important as cardiovascular fitness and strength&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;Outlive&lt;/em&gt; claims that exercise is the best way to increase longevity, and that the ideal exercise program includes four components:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;long-duration, low-intensity cardio (“zone 2” training)&lt;/li&gt;
  &lt;li&gt;high-intensity interval training (VO2 max training)&lt;/li&gt;
  &lt;li&gt;strength training&lt;/li&gt;
  &lt;li&gt;stability training&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;An abundance of evidence shows that the first three types of exercise are excellent for health and longevity. The fourth is not so well-supported.&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;[Stability] is the foundation on which our twin pillars of cardiovascular fitness and strength must rest. (page 265)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;My assessment: I don’t know how to evaluate this. It seems to border on &lt;a href=&quot;https://en.wikipedia.org/wiki/Not_even_wrong&quot;&gt;not even wrong&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The book defines stability as “the subconscious ability to harness, decelerate, or stop force”, while admitting that this isn’t a great definition. I don’t know how to empirically test stability by this definition. You can formally define cardiovascular fitness as VO2 max (or resting heart rate, etc.), and then show that improving your chosen metric improves health and reduces mortality risk. But I don’t know how to operationalize “stability”.&lt;/p&gt;

&lt;p&gt;(On one reading, this definition of stability sounds nearly identical to “strength”, because your ability to decelerate/stop force is pretty much entirely determined by your ability to generate force.)&lt;/p&gt;

&lt;p&gt;The chapter on stability only made two-ish testable claims that I could identify. The first:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;DNS [dynamic neuromuscular stabilization] originated with a group of Czech neurologists who were working with young children with cerebral palsy in a hospital in Prague in the 1960s. They noticed that because of their illness, these kids did &lt;em&gt;not&lt;/em&gt; go through the normal infant stages of rolling, crawling, and so forth. Thus they had movement problems throughout their lives. But when the children with cerebral palsy were put through a “training” program consisting of a certain sequence of movements, replicating the usual stages of learning to crawl, sit up, and eventually stand, their symptoms improved and they were better able to control their motions as they matured. The researchers realized that as we grow up, most healthy humans actually go through an opposite process—we lose these natural, healthy, almost ingrained movements. (page 270)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;(The book cites &lt;a href=&quot;https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3578435/&quot;&gt;Frank et al. (2013)&lt;/a&gt;&lt;sup id=&quot;fnref:9&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:9&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;31&lt;/a&gt;&lt;/sup&gt;.)&lt;/p&gt;

&lt;p&gt;TLDR: Some research found that DNS helped children with cerebral palsy, so it might also help adults prevent injuries. The assertion that it would help adults is presented without evidence, and to my knowledge, no evidence exists.&lt;/p&gt;

&lt;p&gt;Relatedly, &lt;em&gt;Outlive&lt;/em&gt; cites an &lt;a href=&quot;https://peterattiamd.com/michaelrintala/&quot;&gt;interview&lt;/a&gt; Peter Attia did with “leading American practitioner of DNS” Michael Rintala, D.C. (a.k.a. Doctor of Chiropractic). &lt;a href=&quot;https://en.wikipedia.org/wiki/Chiropractic&quot;&gt;Chiropractic&lt;/a&gt; is pseudoscience and chiropractors are fake doctors, so without having looked into this much, I’m pretty skeptical of DNS. If Rintala practices bogus medicine in one arena, that’s evidence that his DNS research is also bogus.&lt;/p&gt;

&lt;p&gt;I don’t know that this first claim is false, but it’s essentially pulled out of thin air so I have no reason to believe that it’s true.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Outlive&lt;/em&gt; makes a second semi-testable claim: it implies (but does not explicitly state) that squatting with perfect form has lower injury risk than squatting asymmetrically. This claim sounds intuitively plausible but I could not find any supporting evidence (and there’s some evidence that “bad” squat form isn’t necessarily bad&lt;sup id=&quot;fnref:33&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:33&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;32&lt;/a&gt;&lt;/sup&gt;).&lt;/p&gt;

&lt;p&gt;And squatting asymmetrically is probably better than not squatting at all (credence: 90%) because (1) studies find robust health benefits to strength training and (2) probably most of the subjects in those studies don’t have particularly good form.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Outlive&lt;/em&gt; didn’t do a good job of defining “stability training”, so I’ll do it myself. Let’s say the purpose of stability training is to reduce the risk of falling. In that case, is stability training useful?&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;A meta-analysis by &lt;a href=&quot;https://doi.org/10.1071/NB10056&quot;&gt;Sherrington et al. (2011)&lt;/a&gt;&lt;sup id=&quot;fnref:88&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:88&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;33&lt;/a&gt;&lt;/sup&gt; found that essentially any kind of exercise reduced fall risk, with balance training having a 22% larger effect than an “average” exercise program, and long-duration exercise (&amp;gt;50 hour trial duration) having a 23% larger effect than “average”. So “stability training” does appear to work, and high-dose exercise programs that included balance training reduced fall risk by 38%.&lt;/li&gt;
  &lt;li&gt;A meta-analysis by &lt;a href=&quot;https://doi.org/10.1001/jamainternmed.2018.5406&quot;&gt;de Souto Barreto et al. (2019)&lt;/a&gt;&lt;sup id=&quot;fnref:89&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:89&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;34&lt;/a&gt;&lt;/sup&gt; found that exercise significantly reduced falls (RR = 0.88) and injurious falls (RR = 0.74), and non-significantly reduced fractures (RR = 0.84, 95% CI 0.71–1.00). It compared exercise programs by type: (1) aerobic, (2) strength, (3) other (tai chi/dance), (4) multicomponent (aerobic + strength + balance). The meta-analysis did not find any significant differences between the four types. (And it wasn’t a p = 0.06 situation either—most of the p-values were greater than 0.8.)&lt;/li&gt;
  &lt;li&gt;There’s some contrary evidence on balance training. A meta-analysis by &lt;a href=&quot;https://link.springer.com/article/10.1007/s40279-016-0515-z&quot;&gt;Kümmel et al. (2016)&lt;/a&gt;&lt;sup id=&quot;fnref:110&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:110&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;35&lt;/a&gt;&lt;/sup&gt; found that balance training on a particular task improves performance on that task, but does not transfer even to very similar tasks. This gives reason to doubt that balance training reduces injury risk.&lt;/li&gt;
&lt;/ul&gt;
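&lt;p&gt;To make those relative risks concrete, here’s a hypothetical back-of-the-envelope (the baseline fall rate is an assumed figure for illustration, not a number from the paper):&lt;/p&gt;

```python
# Translate a relative risk (RR) into absolute falls prevented,
# assuming a baseline of 30 falls per 100 older adults per year.
baseline_falls_per_100 = 30
rr_falls = 0.88                  # de Souto Barreto et al. (2019)

falls_with_exercise = baseline_falls_per_100 * rr_falls
prevented = baseline_falls_per_100 - falls_with_exercise
print(round(falls_with_exercise, 1), round(prevented, 1))   # 26.4 3.6
```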

&lt;p&gt;So exercise appears to help with stability. And balance exercises might work better than other types of exercise, but they might not work at all.&lt;/p&gt;

&lt;p&gt;My assessment of some claims that &lt;em&gt;Outlive&lt;/em&gt; didn’t make, but sort of implied:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Stability is useful: &lt;strong&gt;likely true&lt;/strong&gt; (credence: 90%).&lt;/li&gt;
  &lt;li&gt;Exercise can improve stability: &lt;strong&gt;likely true&lt;/strong&gt; (credence: 80%).&lt;/li&gt;
  &lt;li&gt;Exercising to improve stability matters as much as exercising to reduce cardiovascular disease/diabetes/cancer: &lt;strong&gt;almost certainly false&lt;/strong&gt; (credence: 5%).&lt;/li&gt;
  &lt;li&gt;Most people should do additional stability training on top of the cardio and strength training that they should already be doing: &lt;strong&gt;likely false&lt;/strong&gt; (credence: 15%).&lt;/li&gt;
&lt;/ul&gt;

&lt;h1 id=&quot;nutrition&quot;&gt;Nutrition&lt;/h1&gt;

&lt;h2 id=&quot;rhesus-monkey-studies-suggest-that-calorie-restriction-improves-longevity-but-only-if-you-eat-a-fairly-unhealthy-diet&quot;&gt;Rhesus monkey studies suggest that calorie restriction improves longevity but only if you eat a fairly unhealthy diet&lt;/h2&gt;

&lt;p&gt;I won’t provide a direct quote from &lt;em&gt;Outlive&lt;/em&gt; because it would be too long. To summarize, the book says (pages 312–316):&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;A 2009 University of Wisconsin-Madison (UW) &lt;a href=&quot;https://doi.org/10.1126/science.1173635&quot;&gt;study&lt;/a&gt;&lt;sup id=&quot;fnref:15&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:15&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;36&lt;/a&gt;&lt;/sup&gt; found that rhesus monkeys on a calorie-restricted diet lived longer than the control group.&lt;/li&gt;
  &lt;li&gt;But a similar 2012 &lt;a href=&quot;https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3832985/&quot;&gt;study&lt;/a&gt;&lt;sup id=&quot;fnref:13&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:13&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;37&lt;/a&gt;&lt;/sup&gt; by the National Institute of Aging (NIA) found that a calorie-restricted diet did &lt;em&gt;not&lt;/em&gt; improve longevity.&lt;/li&gt;
  &lt;li&gt;The biggest difference between the studies was that the UW monkeys ate processed food and the NIA monkeys ate a whole-foods diet formulated by a primate nutritionist.&lt;/li&gt;
  &lt;li&gt;So it looks like calorie restriction improves longevity if you eat mostly processed food, and doesn’t matter much if you eat a healthy diet.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;My assessment:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;em&gt;Outlive&lt;/em&gt; accurately summarizes these two studies: &lt;strong&gt;true&lt;/strong&gt; (credence: 90%).&lt;/li&gt;
  &lt;li&gt;Calorie restriction (CR) improves longevity but only if you eat a fairly unhealthy diet: &lt;strong&gt;likely true&lt;/strong&gt; (credence: 70%).&lt;/li&gt;
  &lt;li&gt;The rhesus monkey studies support the above claim: &lt;strong&gt;unclear&lt;/strong&gt; (credence: 50%).&lt;/li&gt;
  &lt;li&gt;The rhesus monkey studies generalize well to humans: &lt;strong&gt;somewhat unlikely&lt;/strong&gt; (credence: 35%).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I found the book’s interpretation to be reasonable and appropriately couched in uncertainty but I want to write about the studies because they were interesting.&lt;/p&gt;

&lt;p&gt;A 2017 &lt;a href=&quot;https://doi.org/10.1038/ncomms14063&quot;&gt;collaboration&lt;/a&gt;&lt;sup id=&quot;fnref:14&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:14&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;38&lt;/a&gt;&lt;/sup&gt; by the authors of the UW and NIA studies reviewed the differences in study designs and outcomes. They agreed with Attia’s interpretation that diet quality was the most likely explanation for the different results, and that the studies jointly suggest that calorie intake is an important predictor of longevity—the monkeys in the NIA control group ate as little as those in the UW calorie restriction (CR) group.&lt;/p&gt;

&lt;p&gt;I read the collaboration and noticed some results that don’t add up:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;In the NIA study, the control group developed more chronic disease than the CR group (&lt;a href=&quot;https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5247583/figure/f6/&quot;&gt;Figure 6&lt;/a&gt;), which seemingly contradicts the finding that calorie restriction didn’t help.&lt;/li&gt;
  &lt;li&gt;NIA split monkeys into young and old cohorts based on the age of each monkey when the study started. Within the young cohort, the CR group had less chronic disease (&lt;a href=&quot;https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5247583/figure/f6/&quot;&gt;Figure 6&lt;/a&gt;), but had &lt;em&gt;worse&lt;/em&gt; mean and median longevity (&lt;a href=&quot;https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5247583/table/t3/?report=objectonly&quot;&gt;Table 2&lt;/a&gt;). Why do these two measurements point in opposite directions?
    &lt;ul&gt;
      &lt;li&gt;CR &lt;em&gt;reduced&lt;/em&gt; lifespan in the young NIA cohort, and the magnitude of the effect was &lt;em&gt;larger&lt;/em&gt; than in the UW study, but it wasn’t statistically significant&lt;sup id=&quot;fnref:49&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:49&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;39&lt;/a&gt;&lt;/sup&gt; so of course the authors ignored it. I’m not saying it’s a real effect, I’m just saying if you get a result (even a non-significant result) that goes in the &lt;em&gt;opposite&lt;/em&gt; direction of what you predicted, then you should take that as a cue that you’re missing something.&lt;sup id=&quot;fnref:48&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:48&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;40&lt;/a&gt;&lt;/sup&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;The NIA young male cohort saw a decrease in &lt;a href=&quot;https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5247583/figure/f2/&quot;&gt;bodyweight&lt;/a&gt; and &lt;a href=&quot;https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5247583/figure/f3/&quot;&gt;body fat&lt;/a&gt; while the other three NIA cohorts saw essentially no change. Presumably, the main mechanism of calorie restriction is that it prevents obesity, so we should see a longevity improvement among NIA young males and not among the other three cohorts. But that’s not what we see. Instead we see essentially no effect in old males/females and a negative effect in young males/females.&lt;/li&gt;
  &lt;li&gt;In the NIA study, why did calorie restriction reduce average bodyweight for males but &lt;em&gt;not&lt;/em&gt; for females (&lt;a href=&quot;https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5247583/figure/f2/&quot;&gt;Figure 2&lt;/a&gt;)? The authors took this as evidence of “sexual dimorphism in the relationship between food intake and bodyweight”. That does not sound plausible to me.&lt;/li&gt;
  &lt;li&gt;According to the authors: “In rodents, early onset CR is more effective in extending longevity than adult onset CR. For nonhuman primates it appears that CR, while beneficial when implemented in adulthood, does not improve survival when implemented in juveniles.” It’s suspicious that rodent studies and monkey studies produced opposite results in this respect.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The NIA control group monkeys ate about as much as the UW calorie-restricted monkeys, which the authors take as evidence in favor of the original hypothesis—lower caloric intake improves longevity. Maybe.&lt;sup id=&quot;fnref:51&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:51&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;41&lt;/a&gt;&lt;/sup&gt; Or maybe it was the higher-quality diet (irrespective of caloric intake), or some other difference between the two studies.&lt;sup id=&quot;fnref:50&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:50&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;42&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;If two studies conflict, I’m wary of making my hypothesis more complicated to fit the results. The hypothesis started as “CR improves longevity”, which the UW study supported. But when the NIA study produced contradictory evidence, the hypothesis became “CR improves longevity, unless you’re eating a healthy diet, in which case it doesn’t”.&lt;/p&gt;

&lt;p&gt;The two studies together support this hypothesis, but they don’t distinguish it from other plausible hypotheses (see previous footnote, repeated here&lt;sup id=&quot;fnref:50:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:50&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;42&lt;/a&gt;&lt;/sup&gt;).&lt;/p&gt;

&lt;p&gt;And if you look closely at the NIA results, the hypothesis needs to become even more complicated: something like “CR improves longevity, but if you’re eating a healthy diet then it has no effect for old monkeys and a harmful effect for young monkeys, and the mechanism is presumably that monkeys with healthy diets tend to have lower rates of obesity, except that young males in the NIA study didn’t weigh much less than in UW so for young males the mechanism is some other thing, and also the young vs. old effect is reversed in rodents for some reason.”&lt;/p&gt;

&lt;p&gt;The more complicated a hypothesis, the more supporting evidence it requires.&lt;/p&gt;

&lt;p&gt;More broadly, I’m skeptical that CR studies on lab animals generalize to the real world. Peter Attia agrees:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;CR’s usefulness remains doubtful outside the lab; very lean animals may be more susceptible to death from infection or cold temperatures. […] Furthermore, there is no evidence that extreme CR would truly maximize the longevity function in an organism as complex as we humans, who live in a more variable environment than the animals [studied]. While it seems likely that it would reduce the risk of succumbing to at least some [chronic diseases], it seems equally likely that the uptick in mortality due to infections, trauma, and frailty might offset these gains. (page 81)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Another fact that seems important to me, but that Attia and the UW/NIA authors didn’t discuss: the monkeys never exercised. They were permanently housed in small cages (&lt;a href=&quot;https://doi.org/10.1016/j.neurobiolaging.2004.09.013&quot;&gt;Mattison et al. (2005)&lt;/a&gt;&lt;sup id=&quot;fnref:16&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:16&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;43&lt;/a&gt;&lt;/sup&gt;). Some of the UW monkeys did participate in a different &lt;a href=&quot;https://doi.org/10.1016/j.exger.2013.08.002&quot;&gt;study&lt;/a&gt;&lt;sup id=&quot;fnref:17&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:17&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;44&lt;/a&gt;&lt;/sup&gt; measuring their physical activity, which moved them to a metabolic chamber, but the metabolic chamber was about the same size as the cages—i.e., too small for meaningful exercise.&lt;/p&gt;

&lt;p&gt;Physical activity appears to largely or fully cancel out the harms of higher-calorie diets.&lt;sup id=&quot;fnref:35&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:35&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;45&lt;/a&gt;&lt;/sup&gt; Even if calorie restriction works for sedentary people, it’s less likely to improve health for folks who exercise regularly.&lt;/p&gt;

&lt;p&gt;Returning to the original hypothesis—“CR improves longevity but only if you eat a fairly unhealthy diet”—these studies provide a small amount of evidence for the hypothesis but not much. That said, the hypothesis sounds correct to me:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;People with unhealthy diets tend to overeat, so eating less would probably improve their health.&lt;/li&gt;
  &lt;li&gt;People who get most of their calories from healthy sources (whole grains, nuts, etc.) are much less likely to overeat, so there’s no point in calorie restriction.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;the-data-are-unclear-on-whether-reducing-saturated-fat-intake-is-beneficial&quot;&gt;The data are unclear on whether reducing saturated fat intake is beneficial&lt;/h2&gt;

&lt;p&gt;(Note: SFA = saturated fatty acid, MUFA = monounsaturated fatty acid, PUFA = polyunsaturated fatty acid)&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;A more recent publication by the Cochrane Collaboration, published in 2020 as a 287-page treatise titled &lt;em&gt;Reduction in Saturated Fat Intake for Cardiovascular Disease&lt;/em&gt;&lt;sup id=&quot;fnref:62&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:62&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;46&lt;/a&gt;&lt;/sup&gt;, looked at fifteen RCTs in over fifty-six thousand patients and found, among other things, that “reducing dietary saturated fat reduced the risk of combined cardiovascular events by 17%.” Interesting. But the same review also found “little or no effect of reducing saturated fat on all-cause mortality or cardiovascular mortality.”&lt;/p&gt;

  &lt;p&gt;[…]&lt;/p&gt;

  &lt;p&gt;The data are very unclear on this question, at least at the population level. […] [A]ny hope of using broad insights from evidence-based medicine is bound to fail when it comes to nutrition, because such population-level data cannot provide much value at the individual level when the effect sizes are so small, as they clearly are here. (pages 337–338)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And a bonus quote:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;If, after reading this chapter, you’re upset because you don’t quite agree with some detail I’ve covered—be it the ratio of MUFA to PUFA to SFA, or the exact bioavailability of soy protein, the role of seed oils and lectins, or the ideal target for average blood glucose levels […], I have one final piece of advice. Stop overthinking nutrition so much. Put the book down. Go outside and exercise. (page 346)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Well, joke’s on you: I already exercised today, and now I’m back to over-analyzing saturated fat.&lt;/p&gt;

&lt;p&gt;My assessment:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Saturated fat is unhealthy in expectation: &lt;strong&gt;likely true&lt;/strong&gt; (credence: 85%).&lt;/li&gt;
  &lt;li&gt;It’s a good idea for most people to reduce their SFA intake: &lt;strong&gt;possible&lt;/strong&gt; (credence: 50%).&lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;47&lt;/a&gt;&lt;/sup&gt;&lt;/li&gt;
  &lt;li&gt;It’s a good idea for people with high cholesterol to reduce their SFA intake: &lt;strong&gt;likely true&lt;/strong&gt; (credence: 70%).&lt;/li&gt;
  &lt;li&gt;The data are unclear: &lt;strong&gt;unclear&lt;/strong&gt;. (Yes, it’s unclear whether the data are unclear. It depends on how much clarity you want.)&lt;/li&gt;
  &lt;li&gt;The amount of SFA in your diet doesn’t matter all that much: &lt;strong&gt;possible&lt;/strong&gt; (credence: 50%).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Attia’s position on saturated fat stuck out to me because it cuts against the mainstream view that saturated fat is unhealthy. After spending much longer on this than I’d originally planned, I’ve come to the conclusion that the mainstream advice is basically reasonable, and Attia’s position is also basically reasonable. There’s some evidence that reducing SFA is beneficial and little evidence to the contrary, but (a) the evidence is only moderately strong at best, and (b) there’s a lot of variation in how SFA affects people, so you might not need to worry about it unless you have high cholesterol.&lt;/p&gt;

&lt;p&gt;Observational studies have found mixed results, with the more reliable studies generally finding moderate associations between saturated fat and heart disease. For example, &lt;a href=&quot;https://doi.org/10.1136/bmj.i5796&quot;&gt;Zong et al. (2016)&lt;/a&gt;&lt;sup id=&quot;fnref:64&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:64&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;48&lt;/a&gt;&lt;/sup&gt; found that shifting 1% of daily calories from saturated fat to polyunsaturated fat was associated with an 8% reduction in coronary heart disease (p &amp;lt; 0.001). But observational studies don’t establish causality, so let’s look at randomized controlled trials (RCTs).&lt;/p&gt;

&lt;p&gt;A 2020 &lt;a href=&quot;https://doi.org/10.1002/14651858.cd011737.pub3&quot;&gt;Cochrane review&lt;/a&gt;&lt;sup id=&quot;fnref:62:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:62&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;46&lt;/a&gt;&lt;/sup&gt; of RCTs found that replacing dietary saturated fat significantly reduced cardiovascular events and LDL cholesterol levels, and non-significantly reduced all-cause mortality and cardiovascular mortality. (Some other meta-analyses of RCTs have been done,&lt;sup id=&quot;fnref:65&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:65&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;49&lt;/a&gt;&lt;/sup&gt; but the Cochrane review likely has the strongest methodology.)&lt;/p&gt;

&lt;p&gt;It’s not entirely clear how to interpret the results of the Cochrane review. Out of eight &lt;a href=&quot;https://www.cochranelibrary.com/cdsr/doi/10.1002/14651858.CD011737.pub3/full#CD011737-sec-0008&quot;&gt;primary outcome variables&lt;/a&gt; (including all-cause mortality, cardiovascular mortality, etc.), reducing SFA only statistically significantly improved one variable. But it showed a positive effect for all eight. This weakly suggests that there’s a real effect and the RCTs were underpowered for most of the measures. If we treated the eight measures as independent, this would constitute strong evidence of a real effect, but the measures are mostly correlated with each other.&lt;/p&gt;
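
&lt;p&gt;To make the independence point concrete, here’s a quick sanity check (my own sketch, not part of the Cochrane analysis): under a null hypothesis of no effect, each outcome is equally likely to favor either arm, so eight independent positive results would occur by chance only about 0.4% of the time. Because the outcomes are correlated, the real evidential weight is much weaker than that.&lt;/p&gt;

```python
# Illustrative sign-test intuition for the eight primary outcomes.
# This assumes independence, which the outcomes do NOT satisfy,
# so it overstates the strength of the evidence.

def prob_all_positive(n_outcomes, p_positive=0.5):
    """Chance that all n independent outcomes favor the intervention
    under a null hypothesis of no real effect."""
    return p_positive ** n_outcomes

print(prob_all_positive(8))  # 0.00390625, i.e. about 1 in 256
```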

&lt;p&gt;The Cochrane review looked at dozens of other outcome variables. Most importantly, replacing saturated fat significantly reduced LDL cholesterol. (A &lt;a href=&quot;https://iris.who.int/bitstream/handle/10665/246104/9789241565349-eng.pdf&quot;&gt;WHO (2016)&lt;/a&gt;&lt;sup id=&quot;fnref:68&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:68&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;50&lt;/a&gt;&lt;/sup&gt; meta-analysis of 84 RCTs agreed with this result.) Cholesterol-lowering drugs have been shown to lower all-cause mortality (see &lt;a href=&quot;https://doi.org/10.1001/jama.2018.2525&quot;&gt;Navarese et al. (2018)&lt;/a&gt;&lt;sup id=&quot;fnref:67&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:67&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;51&lt;/a&gt;&lt;/sup&gt; and &lt;a href=&quot;https://doi.org/10.1097/fjc.0000000000001345&quot;&gt;Ennezat et al. (2022)&lt;/a&gt;&lt;sup id=&quot;fnref:66&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:66&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;52&lt;/a&gt;&lt;/sup&gt;), so it stands to reason—although we only have weak direct evidence—that if reducing SFA improves LDL cholesterol, then it should improve all-cause mortality. Somewhat contradicting this, an RCT by &lt;a href=&quot;https://doi.org/10.1093/ajcn/nqz035&quot;&gt;Bergeron et al. (2019)&lt;/a&gt;&lt;sup id=&quot;fnref:71&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:71&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;53&lt;/a&gt;&lt;/sup&gt; found that SFA caused the body to produce mainly larger LDL particles, which are less harmful than small particles.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://www.ahajournals.org/doi/full/10.1161/CIR.0000000000000510&quot;&gt;Sacks et al. (2017)&lt;/a&gt;&lt;sup id=&quot;fnref:90&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:90&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;54&lt;/a&gt;&lt;/sup&gt; performed a review with stricter inclusion criteria, looking only at RCTs that:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;controlled subjects’ dietary intake;&lt;/li&gt;
  &lt;li&gt;lasted at least 2 years;&lt;/li&gt;
  &lt;li&gt;proved adherence by measuring biomarkers like cholesterol;&lt;/li&gt;
  &lt;li&gt;did not replace saturated fats with trans fats in the intervention group.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Only four trials met these criteria. Three of the four were included in the Cochrane review; the fourth, the &lt;a href=&quot;https://doi.org/10.1016/s0140-6736(72)92208-8&quot;&gt;Finnish mental hospital study&lt;/a&gt;&lt;sup id=&quot;fnref:97&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:97&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;55&lt;/a&gt;&lt;/sup&gt;, was excluded for using a cluster-randomized design instead of full randomization. All four of these studies supported the hypothesis that reducing SFA improves cardiovascular health, and they had a weighted average relative risk of 0.71 (95% CI 0.62–0.81, see &lt;a href=&quot;https://www.ahajournals.org/cms/10.1161/CIR.0000000000000510/asset/c991ecf0-a90b-4dbc-b9ff-27ad55c8b3a6/assets/graphic/e1fig02.jpeg&quot;&gt;Figure 2&lt;/a&gt;).&lt;/p&gt;
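
&lt;p&gt;For readers curious how a “weighted average relative risk” like that is typically produced: meta-analyses usually pool trials by inverse-variance weighting on the log scale. The sketch below uses hypothetical trial numbers, not the four studies from Sacks et al.&lt;/p&gt;

```python
# Sketch of inverse-variance pooling on the log-RR scale -- the usual way
# a "weighted average relative risk" is computed in a meta-analysis.
# The trial numbers below are hypothetical, NOT the actual four studies.
import math

def pooled_rr(trials):
    """trials: list of (rr, ci_low, ci_high) tuples; returns pooled RR."""
    weights, log_rrs = [], []
    for rr, lo, hi in trials:
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE from the 95% CI
        weights.append(1 / se ** 2)   # precision = 1 / variance
        log_rrs.append(math.log(rr))
    pooled_log = sum(w * x for w, x in zip(weights, log_rrs)) / sum(weights)
    return math.exp(pooled_log)

# Three hypothetical trials: (RR, 95% CI low, 95% CI high)
trials = [(0.75, 0.55, 1.02), (0.68, 0.48, 0.96), (0.80, 0.60, 1.07)]
print(round(pooled_rr(trials), 2))
```

Precision-weighting means trials with tighter confidence intervals count for more, which is why a single large trial can dominate a pooled estimate.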

&lt;p&gt;Two of the studies in the Cochrane review showed an &lt;em&gt;increase&lt;/em&gt; in cardiovascular events when reducing SFA:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;The &lt;a href=&quot;https://doi.org/10.1136/bmj.e8707&quot;&gt;Sydney Diet Heart study&lt;/a&gt;&lt;sup id=&quot;fnref:98&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:98&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;56&lt;/a&gt;&lt;/sup&gt; had some study participants replace saturated fat with trans-fat-heavy margarine, which likely explains the increase in bad outcomes.&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://doi.org/10.1136/bmj.1.5449.1531&quot;&gt;The Rose study&lt;/a&gt;&lt;sup id=&quot;fnref:99&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:99&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;57&lt;/a&gt;&lt;/sup&gt; did not have any glaring problems like Sydney Diet Heart, but it lasted for less than 2 years and only had a total of 54 participants. I take it as valid but weak contradictory evidence.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;It looks reasonably likely, but not conclusive, that saturated fat is unhealthy. How unhealthy?&lt;/p&gt;

&lt;p&gt;The Cochrane review suggests that an intervention to reduce dietary SFA should prevent one cardiovascular event per 290 person-years and one death per 2300 person-years for the sort of people who participated in these trials (i.e., people with elevated baseline risk of cardiovascular disease).&lt;sup id=&quot;fnref:69&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:69&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;58&lt;/a&gt;&lt;/sup&gt; Compare to exercise, which is associated with a reduction of about 1 death per 300 person-years in individuals with chronic diseases.&lt;sup id=&quot;fnref:74&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:74&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;59&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;
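
&lt;p&gt;As a back-of-the-envelope conversion (my arithmetic, not the review’s): “one event prevented per N person-years” is just an annual absolute risk reduction of 1/N, which makes the SFA and exercise figures directly comparable.&lt;/p&gt;

```python
# Convert "one event prevented per N person-years" into an annual absolute
# risk reduction, to compare the SFA-reduction and exercise figures.
# (Back-of-the-envelope arithmetic on the numbers quoted above.)

def annual_arr(person_years_per_event):
    """Absolute risk reduction per person per year."""
    return 1.0 / person_years_per_event

sfa_cv_events   = annual_arr(290)   # ~0.0034 CV events avoided per year
sfa_deaths      = annual_arr(2300)  # ~0.00043 deaths avoided per year
exercise_deaths = annual_arr(300)   # ~0.0033 deaths avoided per year

# Exercise's mortality benefit is roughly 7-8x the SFA-reduction one:
print(round(exercise_deaths / sfa_deaths, 1))  # 7.7
```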

&lt;p&gt;It’s not clear how to estimate the improvement in mortality for the general population. Participants in these trials died at approximately the normal rate (compare to &lt;a href=&quot;https://www.cdc.gov/nchs/fastats/deaths.htm&quot;&gt;US CDC mortality statistics&lt;/a&gt;), which suggests the effect should be similar, but it makes sense in theory that dietary interventions should have larger effects on unhealthy populations.&lt;/p&gt;

&lt;h2 id=&quot;people-should-take-omega-3-supplements&quot;&gt;People should take omega-3 supplements&lt;/h2&gt;

&lt;blockquote&gt;
  &lt;p&gt;There is some evidence that supplementation with the omega-3 fatty acid DHA, found in fish oil, may help maintain brain health[.] (page 200)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;blockquote&gt;
  &lt;p&gt;[U]nless they are eating a lot of fatty fish, filling their coffers with marine omega-3 [fatty acids], [my patients] almost always need to take EPA and DHA supplements in capsule or oil form. (page 339)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;My assessment:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Omega-3 fatty acids improve brain health: &lt;strong&gt;likely true&lt;/strong&gt; (credence: 75%).&lt;/li&gt;
  &lt;li&gt;Omega-3s improve health in general: &lt;strong&gt;likely true&lt;/strong&gt; (credence: 80%).&lt;/li&gt;
  &lt;li&gt;It’s a good idea for most people to take omega-3 supplements: &lt;strong&gt;somewhat likely&lt;/strong&gt; (credence: 65%).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A meta-analysis of RCTs by &lt;a href=&quot;https://doi.org/10.7759/cureus.30091&quot;&gt;Dighriri et al. (2022)&lt;/a&gt;&lt;sup id=&quot;fnref:52&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:52&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;60&lt;/a&gt;&lt;/sup&gt; found that “[c]onsumption of omega-3 improved learning, memory ability, cognitive well-being, and blood flow in the brain.”&lt;/p&gt;

&lt;p&gt;A &lt;a href=&quot;https://doi.org/10.1002/14651858.cd003177.pub5&quot;&gt;Cochrane review&lt;/a&gt;&lt;sup id=&quot;fnref:53&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:53&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;61&lt;/a&gt;&lt;/sup&gt; of RCTs found no statistically significant effect of omega-3 consumption on all-cause mortality, cardiovascular events, stroke, or arrhythmia, and a weakly statistically significant effect on cardiovascular mortality, coronary heart disease mortality, and coronary heart disease events. The non-significant effects were all positive, except for a small negative effect on stroke.&lt;sup id=&quot;fnref:54&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:54&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;62&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;So omega-3s probably improve brain health, and they might have a small effect on heart health but it’s unclear.&lt;/p&gt;

&lt;p&gt;As far as we can tell, there are no significant downsides to dietary omega-3s, so they easily pass a cost-benefit analysis as long as you don’t mind eating omega-3-rich foods.&lt;/p&gt;

&lt;p&gt;The cost-benefit analysis for omega-3 supplementation is a bit murkier because supplements sometimes contain contaminants.&lt;sup id=&quot;fnref:56&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:56&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;63&lt;/a&gt;&lt;/sup&gt;&lt;sup id=&quot;fnref:55&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:55&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;64&lt;/a&gt;&lt;/sup&gt; &lt;a href=&quot;https://doi.org/10.1016/j.jfca.2016.09.008&quot;&gt;Raab et al. (2016)&lt;/a&gt;&lt;sup id=&quot;fnref:57&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:57&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;65&lt;/a&gt;&lt;/sup&gt; tested 67 supplements and found that all had safe levels of heavy metals, but did not test for mercury. &lt;a href=&quot;https://doi.org/10.1533/9780857098863.4.389&quot;&gt;Winwood (2013)&lt;/a&gt;&lt;sup id=&quot;fnref:58&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:58&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;66&lt;/a&gt;&lt;/sup&gt; claims that algae oil typically comes from algae grown in tanks and thus can’t be contaminated by heavy metals in the ocean.&lt;/p&gt;

&lt;p&gt;I personally take a daily algae oil supplement. I take algae oil instead of fish oil because I’m vegan, but the potentially reduced risk of contaminants is a nice bonus.&lt;/p&gt;

&lt;h1 id=&quot;sleep&quot;&gt;Sleep&lt;/h1&gt;

&lt;p&gt;I read this chapter more skeptically than the others because it quoted Matthew Walker near the beginning. This raised some alarm bells because Walker wrote a &lt;a href=&quot;https://guzey.com/books/why-we-sleep/&quot;&gt;bad book&lt;/a&gt; about sleep and has been caught &lt;a href=&quot;https://statmodeling.stat.columbia.edu/2019/12/27/why-we-sleep-data-manipulation-a-smoking-gun/&quot;&gt;manipulating data&lt;/a&gt;.&lt;sup id=&quot;fnref:31&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:31&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;67&lt;/a&gt;&lt;/sup&gt; &lt;em&gt;Outlive&lt;/em&gt; cited a few of Walker’s papers to support certain claims, but none of those claims seemed particularly important so I didn’t review them.&lt;sup id=&quot;fnref:29&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:29&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;68&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;h2 id=&quot;every-animal-sleeps&quot;&gt;Every animal sleeps&lt;/h2&gt;

&lt;blockquote&gt;
  &lt;p&gt;Every animal engages in some form of sleep; scientists have found no exceptions, so far. (page 353)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;My assessment:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Every animal sleeps: &lt;strong&gt;somewhat unlikely&lt;/strong&gt; (credence: 35%).&lt;/li&gt;
  &lt;li&gt;It’s reasonable to assert that every animal sleeps: &lt;strong&gt;false&lt;/strong&gt; (credence: 20%).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In support of this claim, &lt;em&gt;Outlive&lt;/em&gt; cites &lt;a href=&quot;https://doi.org/10.1371/journal.pbio.0060216&quot;&gt;Cirelli &amp;amp; Tononi (2008)&lt;/a&gt;&lt;sup id=&quot;fnref:18&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:18&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;69&lt;/a&gt;&lt;/sup&gt;, which does not take a strong stance on whether all animals sleep:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;Only a small number of species—mostly mammals and birds—have been evaluated in detail with respect to sleep. Most studies found signs of sleep, both behavioral (quiescence and hyporesponsivity) and electrophysiological (e.g., the slow waves of non-rapid eye movement (NREM) sleep). Scientists have been hesitant to attribute sleep to reptiles, amphibians, fish, and especially invertebrates, preferring the noncommittal term “rest” in the absence of electrophysiological signs resembling those of mammals and birds.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Cirelli &amp;amp; Tononi (2008) references some examples of animals that have been claimed not to sleep (particularly bullfrogs), but says the evidence is weak.&lt;/p&gt;

&lt;p&gt;I don’t think Cirelli &amp;amp; Tononi supports Attia’s claim; it would be more accurate to say “no animal has been proven not to sleep”.&lt;/p&gt;

&lt;p&gt;But other sources would disagree with this. For example:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;It now appears that many species reduce sleep for long periods of time under normal conditions and that others do not sleep at all, in the way sleep is conventionally defined.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;From Kushida, C. (2013). Encyclopedia of Sleep, Volume 1, page 38 (h/t &lt;a href=&quot;https://guzey.com/books/why-we-sleep/&quot;&gt;Alexey Guzey&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;Attia’s claim “scientists have found no exceptions” is sort of true in the sense that we haven’t found any &lt;em&gt;definitive&lt;/em&gt; exceptions, but the claim “every animal engages in some form of sleep” isn’t well-established either.&lt;/p&gt;

&lt;h2 id=&quot;we-need-to-sleep-75-to-85-hours-a-night&quot;&gt;We need to sleep 7.5 to 8.5 hours a night&lt;/h2&gt;

&lt;blockquote&gt;
  &lt;p&gt;[M]any, many studies have confirmed what your mother told you: We need to sleep about seven and a half to eight and a half hours a night. (page 354)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;My assessment: &lt;strong&gt;false&lt;/strong&gt; (credence: 10%).&lt;/p&gt;

&lt;p&gt;(I could not find a citation for the quoted assertion.)&lt;/p&gt;

&lt;p&gt;The most authoritative source on this question appears to be the &lt;a href=&quot;https://doi.org/10.1016/j.sleh.2014.12.010&quot;&gt;National Sleep Foundation panel&lt;/a&gt;&lt;sup id=&quot;fnref:19&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:19&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;70&lt;/a&gt;&lt;/sup&gt;, where sleep scientists were surveyed on their beliefs. The median panelist believed that 7 to 9 hours a night is “appropriate” for adults age 25–64, and 6 to 10 hours “may be appropriate for some people” in the same age range.&lt;sup id=&quot;fnref:20&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:20&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;71&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;The range given by &lt;em&gt;Outlive&lt;/em&gt; (7.5 to 8.5 hours) is excessively narrow—according to sleep scientists, many people can/should sleep more or less than that.&lt;/p&gt;

&lt;p&gt;My subjective uncertainty on this question mostly comes from the fact that I haven’t read any studies or even any meta-analyses, and I’m only 90% confident that the National Sleep Foundation panelists know what they’re talking about.&lt;/p&gt;

&lt;h2 id=&quot;basketball-players-who-were-told-to-sleep-for-10-hours-a-night-had-better-shooting-accuracy&quot;&gt;Basketball players who were told to sleep for 10 hours a night had better shooting accuracy&lt;/h2&gt;

&lt;blockquote&gt;
  &lt;p&gt;In one study, Stanford basketball players were encouraged to strive for ten hours of sleep per day, with or without naps, and to abstain from alcohol or caffeine. After five weeks, their shooting accuracy had improved by 9 percent, and their sprint times had also gotten faster. (page 354)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I didn’t look into this study and I’m not giving a credence because I don’t really care about this particular claim. I bring it up because it contradicts the preceding claim that people should sleep 7.5 to 8.5 hours a night.&lt;/p&gt;

&lt;h2 id=&quot;lack-of-sleep-increases-obesity-and-diabetes-risk&quot;&gt;Lack of sleep increases obesity and diabetes risk&lt;/h2&gt;

&lt;blockquote&gt;
  &lt;p&gt;Even in the short term, sleep deprivation can cause profound insulin resistance. […] Multiple large meta-analyses of sleep studies have revealed a close relationship between sleep duration and risk of type 2 diabetes and the metabolic syndrome. (page 356)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;My assessment:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Sleep deprivation increases the risk of insulin resistance: &lt;strong&gt;highly likely&lt;/strong&gt; (credence: 90%).&lt;/li&gt;
  &lt;li&gt;Observational studies find relationships between short sleep duration and obesity/diabetes/metabolic syndrome: &lt;strong&gt;true&lt;/strong&gt; (credence: 95%).&lt;/li&gt;
  &lt;li&gt;Lack of sleep increases obesity and diabetes risk: &lt;strong&gt;highly likely&lt;/strong&gt; (credence: 85%).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I only briefly investigated this but it passes a basic sanity check. I glanced at the papers cited by &lt;em&gt;Outlive&lt;/em&gt; and they appear to support the quoted text. In addition, RCTs suggest that sleep restriction causes subjects to eat more and increases insulin resistance (a precursor to diabetes)—see meta-analysis by &lt;a href=&quot;https://doi.org/10.1016/j.metabol.2018.02.010&quot;&gt;Reutrakul &amp;amp; Van Cauter (2018)&lt;/a&gt;&lt;sup id=&quot;fnref:32&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:32&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;72&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;h2 id=&quot;a-study-using-mendelian-randomization-found-that-sleeping-6-hours-a-night-increased-risk-of-a-heart-attack&quot;&gt;A study using Mendelian randomization found that sleeping &amp;lt;6 hours a night increased risk of a heart attack&lt;/h2&gt;

&lt;blockquote&gt;
  &lt;p&gt;[O]ne particularly interesting study compared observational and Mendelian randomization&lt;sup id=&quot;fnref:27&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:27&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;73&lt;/a&gt;&lt;/sup&gt; data in people with previous identified genetic variants that either increase or decrease their lifelong exposure to longer or shorter sleep duration. The MR data confirmed the observational findings, that sleeping less than six hours a night was associated with about a 20 percent higher risk of a heart attack. (page 359)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;My assessment:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;A study found that sleeping &amp;lt;6 hours a night increases heart attack risk: &lt;strong&gt;true&lt;/strong&gt; (credence: 98%).&lt;/li&gt;
  &lt;li&gt;Sleeping &amp;lt;6 hours a night increases heart disease risk for most people: &lt;strong&gt;likely true&lt;/strong&gt; (credence: 80%).&lt;/li&gt;
  &lt;li&gt;This quote from the book does a good job of representing the state of the evidence: &lt;strong&gt;false&lt;/strong&gt; (credence: 10%).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In support of this quote, Attia cites &lt;a href=&quot;https://doi.org/10.1038/s41467-019-08917-4&quot;&gt;Dashti et al. (2019)&lt;/a&gt;&lt;sup id=&quot;fnref:21&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:21&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;74&lt;/a&gt;&lt;/sup&gt;, which does not appear to say anything about heart attacks. I believe he meant to cite &lt;a href=&quot;https://doi.org/10.1016/j.jacc.2019.07.022&quot;&gt;Daghlas et al. (2019)&lt;/a&gt;&lt;sup id=&quot;fnref:22&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:22&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;75&lt;/a&gt;&lt;/sup&gt; (on which Dashti is a co-author). Daghlas et al. (2019) supports Attia’s claim.&lt;/p&gt;

&lt;p&gt;However, other Mendelian randomization studies have gotten different results. &lt;a href=&quot;https://doi.org/10.1186/s12944-020-01257-z&quot;&gt;Zhuang et al. (2020)&lt;/a&gt;&lt;sup id=&quot;fnref:23&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:23&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;76&lt;/a&gt;&lt;/sup&gt; found no significant relationship between sleep duration and coronary heart disease; &lt;a href=&quot;https://doi.org/10.1002%2Fehf2.14016&quot;&gt;Yang et al. (2022)&lt;/a&gt;&lt;sup id=&quot;fnref:24&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:24&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;77&lt;/a&gt;&lt;/sup&gt; found a statistically significant but extremely weak (“probably not clinically relevant”) association&lt;sup id=&quot;fnref:26&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:26&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;78&lt;/a&gt;&lt;/sup&gt;; while &lt;a href=&quot;https://doi.org/10.1016/j.sleep.2019.08.014&quot;&gt;Liao et al. (2020)&lt;/a&gt;&lt;sup id=&quot;fnref:25&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:25&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;79&lt;/a&gt;&lt;/sup&gt; broadly agreed with Daghlas et al. (2019).&lt;/p&gt;

&lt;p&gt;Out of four Mendelian randomization studies, two identified strong links between short sleep and heart disease/heart attack risk, and two suggested little to no effect. So while &lt;em&gt;Outlive&lt;/em&gt; does accurately describe the results of a study, it misrepresents the evidence by ignoring the studies that contradict its thesis.&lt;/p&gt;

&lt;p&gt;(I did not look into the quality of these studies; I only read their conclusions. It’s possible that the null-result studies are flawed in some way.)&lt;/p&gt;

&lt;p&gt;The Mendelian randomization studies provide only weak to moderate evidence, but when combined with other evidence (such as the link to obesity discussed in the previous section), it appears reasonably likely that short sleep duration does indeed increase the risk of heart problems.&lt;/p&gt;

&lt;h2 id=&quot;lack-of-sleep-causes-alzheimers-disease&quot;&gt;Lack of sleep causes Alzheimer’s disease&lt;/h2&gt;

&lt;blockquote&gt;
  &lt;p&gt;Subsequent research … has pointed to chronic bad sleep as a powerful potential cause of Alzheimer’s disease and dementia. Sleep, it turns out, is as crucial to maintaining brain health as it is to brain function.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;My assessment:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Lack of sleep causes Alzheimer’s disease: &lt;strong&gt;possibly true&lt;/strong&gt; (credence: 60%).&lt;/li&gt;
  &lt;li&gt;The first sentence of the book quote is reasonable: &lt;strong&gt;true&lt;/strong&gt; (credence: 90%).&lt;/li&gt;
  &lt;li&gt;The second sentence of the book quote is reasonable: &lt;strong&gt;false&lt;/strong&gt; (credence: 20%).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Some research has indeed found a link between bad sleep and Alzheimer’s, but it’s difficult to establish causality—see review article by &lt;a href=&quot;https://doi.org/10.3390%2Fijms21031168&quot;&gt;Lloret et al. (2020)&lt;/a&gt;&lt;sup id=&quot;fnref:46&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:46&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;80&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;The first quoted sentence from &lt;em&gt;Outlive&lt;/em&gt; aligns with Lloret et al.’s summary of the literature. The second sentence converts a speculative hypothesis into a certainty.&lt;/p&gt;

&lt;h1 id=&quot;bonus&quot;&gt;Bonus&lt;/h1&gt;

&lt;h2 id=&quot;dunning-kruger-effect&quot;&gt;Dunning-Kruger effect&lt;/h2&gt;

&lt;blockquote&gt;
  &lt;p&gt;Looking back, I now realize that I was too far on the left of the Dunning-Kruger curve, caricatured below in figure 14—my maximal confidence and relatively minimal knowledge having propelled me quite close to the summit of “Mount Stupid.” (page 293)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The book’s Figure 14 reproduces this image from Wikimedia Commons:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;https://upload.wikimedia.org/wikipedia/commons/4/46/Dunning%E2%80%93Kruger_Effect_01.svg&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;My assessment:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;This graph accurately represents the Dunning-Kruger effect: &lt;strong&gt;false&lt;/strong&gt; (credence: &amp;lt;1%).&lt;/li&gt;
  &lt;li&gt;The existence of a “Mount Stupid” is supported by the scientific evidence: &lt;strong&gt;false&lt;/strong&gt; (credence: 2%).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I’m willing to forgive this mistake because it doesn’t have anything to do with longevity, but it still bugs me.&lt;/p&gt;

&lt;p&gt;If you search “Dunning Kruger” on Google Images, you will see a bunch of graphs that look like that, but no study on the Dunning-Kruger effect has ever produced empirical data with that shape.&lt;/p&gt;

&lt;p&gt;Empirical results actually look something like this:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;https://upload.wikimedia.org/wikipedia/commons/4/43/Dunning%E2%80%93Kruger_Effect2.svg&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;(Presumably, Attia found Figure 14 from the Wikipedia page on the Dunning-Kruger effect. I can’t really blame him for getting something wrong if he just pulled it from Wikipedia. And to Wikipedia’s credit, it has removed the incorrect image and replaced it with the correct one above.)&lt;/p&gt;

&lt;h1 id=&quot;changelog&quot;&gt;Changelog&lt;/h1&gt;

&lt;ul&gt;
  &lt;li&gt;2024-10-23: I added a reference to a meta-analysis of RCTs on balance training specificity. This provides stronger evidence for my previously weakly-held position that balance training isn’t useful. I updated my credences accordingly.&lt;/li&gt;
  &lt;li&gt;2025-05-05: I added three new sections on exercise:
    &lt;ol&gt;
      &lt;li&gt;&lt;a href=&quot;#vo2max-is-the-best-predictor-of-longevity&quot;&gt;VO2max is the best predictor of longevity&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#you-should-train-vo2max-by-doing-hiit-at-the-maximum-sustainable-pace&quot;&gt;You should train VO2max by doing HIIT at the maximum sustainable pace&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#you-should-do-3-hoursweek-of-zone-2-training-and-one-or-two-sessionsweek-of-hiit&quot;&gt;You should do &amp;gt;3 hours/week of zone 2 training and one or two sessions/week of HIIT&lt;/a&gt;&lt;/li&gt;
    &lt;/ol&gt;
  &lt;/li&gt;
  &lt;li&gt;2025-07-03: Previously I was inconsistent with how I reported credences for statements I thought were probably false—sometimes I reported my probability that the statement is true, and sometimes my probability that it’s false. I rewrote them so that all credences are given in terms of my probability that the statement is true.&lt;/li&gt;
  &lt;li&gt;2025-07-04: Made some minor changes in response to feedback.&lt;/li&gt;
&lt;/ul&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;The book is co-authored by Bill Gifford. It’s written from Attia’s point of view and some materials (such as &lt;a href=&quot;https://peterattiamd.com/outlive/&quot;&gt;Peter Attia’s website&lt;/a&gt;) approximately treat Attia as the sole author, so in my review I will credit the book’s claims to Attia and not to Gifford. &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:12&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Page numbers are from the 2023 Kindle edition of &lt;em&gt;Outlive&lt;/em&gt;, ISBN 9780593236598, ebook ISBN 9780593236604. &lt;a href=&quot;#fnref:12&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:59&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;The meta-analysis did definitely find this result, but there’s some wiggle room around what “technically correct” means (because the meta-analysis found different results for different subgroups—I will discuss this shortly). So I’m only 90% confident. &lt;a href=&quot;#fnref:59&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:3&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Stefan, N., Schick, F., &amp;amp; Häring, H. U. (2017). &lt;a href=&quot;https://doi.org/10.1016/j.cmet.2017.07.008&quot;&gt;Causes, Characteristics, and Consequences of Metabolically Unhealthy Normal Weight in Humans.&lt;/a&gt; &lt;a href=&quot;#fnref:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:39&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Kramer, C. K., Zinman, B., and Retnakaran, R. (2013). &lt;a href=&quot;/materials/kramer2013.pdf&quot;&gt;Are metabolically healthy overweight and obesity benign conditions? A systematic review and meta-analysis.&lt;/a&gt; &lt;a href=&quot;#fnref:39&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:40&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;The standard method for significance-testing a relative risk is to assume that its logarithm follows a normal distribution. I did that and got an odds ratio of 7.27. But the sample mean of 1.19 is pretty far from the geometric mean of the 95% CI (1.16), and much closer to its arithmetic mean (1.18), so I redid the test with the assumption that the RR follows a normal distribution. The second method produces an odds ratio of 5.66, which I rounded down to 5 to be conservative.&lt;/p&gt;

      &lt;p&gt;It’s bad practice to run two different significance tests, but I think it’s okay in this case because I preferred the test that weakened my argument rather than strengthening it. &lt;a href=&quot;#fnref:40&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
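      &lt;p&gt;For concreteness, here’s a sketch of the two tests in Python. The CI bounds below are placeholders chosen to illustrate the method, not the actual values from the meta-analysis:&lt;/p&gt;

```python
from math import erf, log, sqrt

def p_two_sided(z):
    # Two-sided p-value for a standard normal z-score
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Placeholder numbers: RR point estimate and 95% CI bounds
rr, lo, hi = 1.19, 0.96, 1.40

# Standard method: assume log(RR) is normally distributed,
# so the CI width recovers the standard error on the log scale
se_log = (log(hi) - log(lo)) / (2 * 1.96)
p_log = p_two_sided(log(rr) / se_log)

# Alternative method: assume RR itself is normally distributed
se_lin = (hi - lo) / (2 * 1.96)
p_lin = p_two_sided((rr - 1) / se_lin)

# Loosely express each result as odds in favor of a true effect
odds_log = (1 - p_log) / p_log
odds_lin = (1 - p_lin) / p_lin
```

With a right-skewed CI like this one, the linear-normal assumption yields weaker odds than the log-normal assumption, which is the direction described above.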
    &lt;/li&gt;
    &lt;li id=&quot;fn:42&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;The original article used PubMed links, which I replaced with DOI links and added full citations in footnotes. The quote is otherwise unchanged. &lt;a href=&quot;#fnref:42&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:41&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Lin, H., Zhang, L., Zheng, R., &amp;amp; Zheng, Y. (2017). &lt;a href=&quot;https://doi.org/10.1097/md.0000000000008838&quot;&gt;The prevalence, metabolic risk and effects of lifestyle intervention for metabolically healthy obesity.&lt;/a&gt; &lt;a href=&quot;#fnref:41&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:4&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Eckel, N., Meidtner, K., Kalle-Uhlmann, T., Stefan, N., and Schulze, M. B. (2016). &lt;a href=&quot;https://pubmed.ncbi.nlm.nih.gov/26701871/&quot;&gt;Metabolically healthy obesity and cardiovascular events: a systematic review and meta-analysis.&lt;/a&gt; &lt;a href=&quot;#fnref:4&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:5&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Kim, T. J., Shin, H. Y., Chang, Y., Kang, M., Jee, J., Choi, Y. H., Ahn, H. S., Ahn, S. H., Son, H. J., &amp;amp; Ryu, S. (2017). &lt;a href=&quot;https://doi.org/10.1016/j.atherosclerosis.2017.03.035&quot;&gt;Metabolically healthy obesity and the risk for subclinical atherosclerosis.&lt;/a&gt; &lt;a href=&quot;#fnref:5&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:6&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Chang, Y., Jung, H. S., Cho, J., Zhang, Y., Yun, K. E., Lazo, M., Pastor-Barriuso, R., Ahn, J., Kim, C. W., Rampal, S., Cainzos-Achirica, M., Zhao, D., Chung, E. C., Shin, H., Guallar, E., &amp;amp; Ryu, S. (2016). &lt;a href=&quot;https://doi.org/10.1038/ajg.2016.178&quot;&gt;Metabolically Healthy Obesity and the Development of Nonalcoholic Fatty Liver Disease.&lt;/a&gt; &lt;a href=&quot;#fnref:6&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:7&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Chang, A. R., Surapaneni, A., Kirchner, H. L., Young, A., Kramer, H. J., Carey, D. J., Appel, L. J., &amp;amp; Grams, M. E. (2018). &lt;a href=&quot;https://doi.org/10.1002/oby.22134&quot;&gt;Metabolically Healthy Obesity and Risk of Kidney Function Decline.&lt;/a&gt; &lt;a href=&quot;#fnref:7&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:8&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Bell, J. A., Kivimaki, M., &amp;amp; Hamer, M. (2014). &lt;a href=&quot;https://doi.org/10.1111/obr.12157&quot;&gt;Metabolically healthy obesity and risk of incident type 2 diabetes: a meta‐analysis of prospective cohort studies.&lt;/a&gt; &lt;a href=&quot;#fnref:8&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:60&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;When Attia says the amyloid theory was weakening, he’s referring to the once-popular hypothesis that amyloid-beta is the sole cause of Alzheimer’s. That hypothesis now appears to be false, but amyloid-beta still looks relevant to Alzheimer’s somehow (it’s not quite clear how). &lt;a href=&quot;#fnref:60&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:107&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Actually, I have very good reading comprehension in a relative sense—in the 98th or 99th percentile according to standardized tests. But 98th percentile reading comprehension still isn’t good enough to consistently understand the things you read, apparently. &lt;a href=&quot;#fnref:107&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:116&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I’m not confident in the claim that they’re more accurate. I am not aware of any research directly comparing the predictive power of VO2max itself vs. performance tests. In theory, I would expect performance tests to be better predictors because they’re directly measuring your body’s physical capabilities. &lt;a href=&quot;#fnref:116&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:111&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Some special-purpose interventions might work better. For example, if you’re a heavy smoker, quitting smoking might have a bigger effect than starting exercise. &lt;a href=&quot;#fnref:111&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:112&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Kokkinos, P., Faselis, C., Samuel, I. B. H., Pittaras, A., Doumas, M., Murphy, R., Heimall, M. S. et al. (2022). &lt;a href=&quot;https://doi.org/10.1016/j.jacc.2022.05.031&quot;&gt;Cardiorespiratory Fitness and Mortality Risk Across the Spectra of Age, Race, and Sex.&lt;/a&gt; &lt;a href=&quot;#fnref:112&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:113&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Mandsager, K., Harb, S., Cremer, P., Phelan, D., Nissen, S. E., &amp;amp; Jaber, W. (2018). &lt;a href=&quot;https://doi.org/10.1001/jamanetworkopen.2018.3605&quot;&gt;Association of Cardiorespiratory Fitness With Long-term Mortality Among Adults Undergoing Exercise Treadmill Testing.&lt;/a&gt; &lt;a href=&quot;#fnref:113&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:114&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Harber, M. P., Kaminsky, L. A., Arena, R., Blair, S. N., Franklin, B. A., Myers, J., &amp;amp; Ross, R. (2017). &lt;a href=&quot;https://doi.org/10.1016/j.pcad.2017.03.001&quot;&gt;Impact of Cardiorespiratory Fitness on All-Cause and Disease-Specific Mortality: Advances Since 2009.&lt;/a&gt; &lt;a href=&quot;#fnref:114&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:120&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Wen, D., Utesch, T., Wu, J., Robertson, S., Liu, J., Hu, G., &amp;amp; Chen, H. (2019). &lt;a href=&quot;https://doi.org/10.1016/j.jsams.2019.01.013&quot;&gt;Effects of different protocols of high intensity interval training for VO2max improvements in adults: A meta-analysis of randomised controlled trials.&lt;/a&gt; &lt;a href=&quot;#fnref:120&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:120:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt; &lt;a href=&quot;#fnref:120:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;3&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:121&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Seiler, S. (2010). &lt;a href=&quot;https://doi.org/10.1123/ijspp.5.3.276&quot;&gt;What is Best Practice for Training Intensity and Duration Distribution in Endurance Athletes?.&lt;/a&gt; &lt;a href=&quot;#fnref:121&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:121:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt; &lt;a href=&quot;#fnref:121:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;3&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:126&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Billat, L. V. (2001). &lt;a href=&quot;https://doi.org/10.2165/00007256-200131010-00002&quot;&gt;Interval Training for Performance: A Scientific and Empirical Practice.&lt;/a&gt; &lt;a href=&quot;#fnref:126&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:122&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Billat, V., Renoux, J. C., Pinoteau, J., Petit, B., &amp;amp; Koralsztein, J. P. (1995). &lt;a href=&quot;https://doi.org/10.3109/13813459508996126&quot;&gt;Times to exhaustion at 90,100 and 105% of velocity at V̇O&lt;sub&gt;2&lt;/sub&gt;max (Maximal aerobic speed) and critical speed in elite longdistance runners.&lt;/a&gt; &lt;a href=&quot;#fnref:122&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:124&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;&lt;a href=&quot;/materials/blondel2001.pdf&quot;&gt;Blondel et al. (2001)&lt;/a&gt;&lt;sup id=&quot;fnref:125&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:125&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;81&lt;/a&gt;&lt;/sup&gt; found that a sample of physically active (but not elite) students could sustain 90% of VO2max for an average of 13.98 minutes. &lt;a href=&quot;#fnref:124&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:130&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Different people use terminology in different ways. I am using LIT to refer to what Attia calls zone 2. Some studies call it moderate-intensity continuous training (MICT). Colloquially, it refers to an exercise intensity that you can sustain for a long time. Technically, it refers to exercise at or below the &lt;a href=&quot;https://en.wikipedia.org/wiki/Lactate_threshold&quot;&gt;lactate threshold&lt;/a&gt;. &lt;a href=&quot;#fnref:130&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:127&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Crowley, E., Powell, C., Carson, B. P., &amp;amp; W. Davies, R. (2022). &lt;a href=&quot;https://doi.org/10.1155/2022/9310710&quot;&gt;The Effect of Exercise Training Intensity on VO2max in Healthy Adults: An Overview of Systematic Reviews and Meta-Analyses.&lt;/a&gt; &lt;a href=&quot;#fnref:127&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:128&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;It may be possible to resolve this inconsistency by digging deeper into the literature. Some relevant questions:&lt;/p&gt;

      &lt;ol&gt;
        &lt;li&gt;Wen et al. (2019)&lt;sup id=&quot;fnref:120:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:120&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;21&lt;/a&gt;&lt;/sup&gt; found that long-duration HIIT worked better than short-duration. For the studies that find no benefit to HIIT over LIT, are they only looking at short-duration HIIT?&lt;/li&gt;
        &lt;li&gt;Some RCTs match volume between groups. So if the HIIT group spends a total of (say) 16 minutes at high intensity, then the LIT group exercises for 16 minutes total. That’s not how people actually exercise. Do meta-analyses understate the benefits of LIT because they include volume-matched studies?&lt;/li&gt;
        &lt;li&gt;How do the returns to HIIT vs. LIT differ for novice vs. experienced athletes?&lt;/li&gt;
        &lt;li&gt;What happens when you combine HIIT with LIT?&lt;/li&gt;
      &lt;/ol&gt;
      &lt;p&gt;&lt;a href=&quot;#fnref:128&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:118&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Mølmen, K. S., Almquist, N. W., &amp;amp; Skattebo, Ø. (2024). &lt;a href=&quot;https://doi.org/10.1007/s40279-024-02120-2&quot;&gt;Effects of Exercise Training on Mitochondrial and Capillary Growth in Human Skeletal Muscle: A Systematic Review and Meta-Regression.&lt;/a&gt; &lt;a href=&quot;#fnref:118&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:123&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Stöggl, T. L., &amp;amp; Sperlich, B. (2015). &lt;a href=&quot;https://doi.org/10.3389/fphys.2015.00295&quot;&gt;The training intensity distribution among well-trained and elite endurance athletes.&lt;/a&gt; &lt;a href=&quot;#fnref:123&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:123:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:9&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Frank, C., Kobesova, A., and Kolar, P. (2013). &lt;a href=&quot;https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3578435/&quot;&gt;Dynamic neuromuscular stabilization &amp;amp; sports rehabilitation.&lt;/a&gt; &lt;a href=&quot;#fnref:9&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:33&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I’m thinking in particular of &lt;a href=&quot;https://doi.org/10.1519/JSC.0000000000004655&quot;&gt;Chiu (2023)&lt;/a&gt;&lt;sup id=&quot;fnref:34&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:34&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;82&lt;/a&gt;&lt;/sup&gt; which investigated knee valgus, a squat technique that was generally regarded as bad form, and found that it may be better than “correct” form. h/t &lt;a href=&quot;https://www.youtube.com/watch?v=UOWQUNZRVtU&quot;&gt;Menno Henselmans&lt;/a&gt;. &lt;a href=&quot;#fnref:33&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:88&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Sherrington, C., Tiedemann, A., Fairhall, N., Close, J. C. T., &amp;amp; Lord, S. R. (2011). &lt;a href=&quot;https://doi.org/10.1071/NB10056&quot;&gt;Exercise to prevent falls in older adults: an updated meta-analysis and best practice recommendations.&lt;/a&gt; &lt;a href=&quot;#fnref:88&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:89&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;de Souto Barreto, P., Rolland, Y., Vellas, B., &amp;amp; Maltais, M. (2019). &lt;a href=&quot;https://doi.org/10.1001/jamainternmed.2018.5406&quot;&gt;Association of Long-term Exercise Training With Risk of Falls, Fractures, Hospitalizations, and Mortality in Older Adults.&lt;/a&gt; &lt;a href=&quot;#fnref:89&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:110&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Kümmel, J., Kramer, A., Giboin, L. S., &amp;amp; Gruber, M. (2016). &lt;a href=&quot;https://link.springer.com/article/10.1007/s40279-016-0515-z&quot;&gt;Specificity of Balance Training in Healthy Individuals: A Systematic Review and Meta-Analysis.&lt;/a&gt; doi: &lt;a href=&quot;https://doi.org/10.1007/s40279-016-0515-z&quot;&gt;10.1007/s40279-016-0515-z&lt;/a&gt; &lt;a href=&quot;#fnref:110&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:15&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Colman, R. J., Anderson, R. M., Johnson, S. C., Kastman, E. K., Kosmatka, K. J., Beasley, T. M., Allison, D. B., Cruzen, C., Simmons, H. A., Kemnitz, J. W., &amp;amp; Weindruch, R. (2009). &lt;a href=&quot;https://doi.org/10.1126/science.1173635&quot;&gt;Caloric Restriction Delays Disease Onset and Mortality in Rhesus Monkeys.&lt;/a&gt; &lt;a href=&quot;#fnref:15&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:13&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Mattison, J. A., Roth, G. S., Beasley, T. M., Tilmont, E. M., Handy, A. M., Herbert, R. L., Longo, D. L., Allison, D. B., Young, J. E., Bryant, M., Barnard, D., Ward, W. F., Qi, W., Ingram, D. K., &amp;amp; de Cabo, R. (2012). &lt;a href=&quot;https://doi.org/10.1038/nature11432&quot;&gt;Impact of caloric restriction on health and survival in rhesus monkeys from the NIA study.&lt;/a&gt; &lt;a href=&quot;#fnref:13&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:14&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Mattison, J. A., Colman, R. J., Beasley, T. M., Allison, D. B., Kemnitz, J. W., Roth, G. S., Ingram, D. K., Weindruch, R., de Cabo, R., &amp;amp; Anderson, R. M. (2017). &lt;a href=&quot;https://doi.org/10.1038/ncomms14063&quot;&gt;Caloric restriction improves health and survival of rhesus monkeys.&lt;/a&gt; &lt;a href=&quot;#fnref:14&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:49&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Actually, the NIA young female cohort did see a statistically significant reduction in lifespan (p = 0.04), but it becomes non-significant if you do a Bonferroni correction. &lt;a href=&quot;#fnref:49&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:48&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Mean and 95% confidence intervals for change in longevity from various study cohorts:&lt;/p&gt;

      &lt;table&gt;
        &lt;thead&gt;
          &lt;tr&gt;
            &lt;th&gt;Cohort&lt;/th&gt;
            &lt;th&gt;Mean&lt;/th&gt;
            &lt;th&gt;95% CI&lt;/th&gt;
          &lt;/tr&gt;
        &lt;/thead&gt;
        &lt;tbody&gt;
          &lt;tr&gt;
            &lt;td&gt;UW male&lt;/td&gt;
            &lt;td&gt;1.58&lt;/td&gt;
            &lt;td&gt;(-1.56, 4.72)&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
            &lt;td&gt;UW female&lt;/td&gt;
            &lt;td&gt;2.22&lt;/td&gt;
            &lt;td&gt;(-2.74, 7.18)&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
            &lt;td&gt;NIA young male&lt;/td&gt;
            &lt;td&gt;-2.29&lt;/td&gt;
            &lt;td&gt;(-7.05, 2.47)&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
            &lt;td&gt;NIA young female&lt;/td&gt;
            &lt;td&gt;-4.79&lt;/td&gt;
            &lt;td&gt;(-8.88, -0.70)&lt;/td&gt;
          &lt;/tr&gt;
        &lt;/tbody&gt;
      &lt;/table&gt;

      &lt;p&gt;The CI for NIA young male contains the mean for UW male but the reverse is not true, and neither female cohort’s CI contains the mean of the other. &lt;a href=&quot;#fnref:48&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
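      &lt;p&gt;Those containment claims are easy to double-check mechanically (the numbers are copied from the table above):&lt;/p&gt;

```python
# Means and 95% CIs for change in longevity, from the table above
cohorts = {
    "UW male":          (1.58,  (-1.56, 4.72)),
    "UW female":        (2.22,  (-2.74, 7.18)),
    "NIA young male":   (-2.29, (-7.05, 2.47)),
    "NIA young female": (-4.79, (-8.88, -0.70)),
}

def ci_contains(a, b):
    # Does cohort a's 95% CI contain cohort b's mean?
    # (mean - lo) * (hi - mean) is non-negative exactly when lo, mean, hi
    # are in order, i.e. the mean falls inside the interval.
    lo, hi = cohorts[a][1]
    mean = cohorts[b][0]
    return (mean - lo) * (hi - mean) >= 0

male_one_way   = ci_contains("NIA young male", "UW male")      # True
male_reverse   = ci_contains("UW male", "NIA young male")      # False
female_one_way = ci_contains("NIA young female", "UW female")  # False
female_reverse = ci_contains("UW female", "NIA young female")  # False
```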
    &lt;/li&gt;
    &lt;li id=&quot;fn:51&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;If true, we would expect to find that calorie restriction worked for the NIA young male cohort because they ate about as much as their UW counterparts. But it didn’t work (in fact it shortened the average lifespan). &lt;a href=&quot;#fnref:51&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:50&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I can think of several other hypotheses.&lt;/p&gt;

      &lt;p&gt;The UW control group ate &lt;em&gt;ad libitum&lt;/em&gt;, which is fancy academic language for “as much as they want”. The NIA control group didn’t eat &lt;em&gt;ad libitum&lt;/em&gt;. Instead, the researchers used previous data to determine how much monkeys tend to eat when fed &lt;em&gt;ad libitum&lt;/em&gt; (controlling for age and bodyweight) and then fed subjects exactly that amount.&lt;/p&gt;

      &lt;p&gt;This brings to mind another hypothesis: Maybe some portion (let’s say 1/3) of monkeys tend to overeat, which causes health problems, and calorie restriction mainly benefits the 1/3 who overeat. If the NIA control monkeys &lt;em&gt;all&lt;/em&gt; received diets based on how much the &lt;em&gt;average&lt;/em&gt; monkey eats, that would prevent the most gluttonous 1/3 from overeating, so additional calorie restriction wouldn’t produce meaningful benefits.&lt;/p&gt;

      &lt;p&gt;You could test this hypothesis using the UW data by dividing monkeys in the calorie restriction cohort into “high-calorie” and “low-calorie” groups based on how much they ate &lt;em&gt;ad libitum&lt;/em&gt; (controlling for age and bodyweight) and seeing whether the high-calorie group had a bigger improvement in longevity than the low-calorie group. The groups would have small sample sizes, so the result probably wouldn’t be statistically significant.&lt;/p&gt;

      &lt;p&gt;Some weak supporting evidence: in the UW study, the median longevity improvement was bigger than the mean improvement.&lt;/p&gt;

      &lt;p&gt;A fourth hypothesis: calorie restriction has a U-shaped effect on longevity, where a little calorie restriction helps, but excess calorie restriction increases mortality. (This is clearly true in the limit—100% calorie restriction certainly isn’t healthy.)&lt;/p&gt;

      &lt;p&gt;The studies weakly contradict this hypothesis. &lt;a href=&quot;https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5247583/figure/f4/&quot;&gt;Figure 4&lt;/a&gt; shows that NIA young males in the control group ate about as much as UW males, while in the other three control group pairings (NIA young female + UW female, NIA old male + UW male, NIA old female + UW female), the NIA group ate less. This predicts that calorie restriction should improve longevity in the NIA young male cohort but have a smaller or negative effect in the other three cohorts. But that’s not what the NIA study found. Instead, the young male and young female cohorts both saw decreased longevity from calorie restriction, and both old cohorts saw approximately no effect.&lt;/p&gt;

      &lt;p&gt;A fifth hypothesis: the studies have fundamental methodological issues that render the results invalid. The studies weakly support this hypothesis given how many peculiar and seemingly-contradictory findings I was able to identify.&lt;/p&gt;

      &lt;p&gt;I don’t know what those methodological issues might be. They could be things like:&lt;/p&gt;

      &lt;ul&gt;
        &lt;li&gt;the different cohorts were managed by different researchers who used inconsistent procedures&lt;/li&gt;
        &lt;li&gt;the cohorts had relevantly different genetic lineages&lt;/li&gt;
        &lt;li&gt;the researchers fabricated data (probably not, but you never know)&lt;/li&gt;
        &lt;li&gt;there was a mold infestation next to the control group’s cages&lt;/li&gt;
      &lt;/ul&gt;

      &lt;p&gt;A sixth hypothesis: calorie restriction works, but only if you live in Wisconsin. &lt;a href=&quot;#fnref:50&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:50:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:16&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Mattison, J. A., Black, A., Huck, J., Moscrip, T., Handy, A., Tilmont, E., Roth, G. S., Lane, M. A., &amp;amp; Ingram, D. K. (2005). &lt;a href=&quot;https://doi.org/10.1016/j.neurobiolaging.2004.09.013&quot;&gt;Age-related decline in caloric intake and motivation for food in rhesus monkeys.&lt;/a&gt; &lt;a href=&quot;#fnref:16&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:17&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Yamada, Y., Colman, R. J., Kemnitz, J. W., Baum, S. T., Anderson, R. M., Weindruch, R., &amp;amp; Schoeller, D. A. (2013). &lt;a href=&quot;https://doi.org/10.1016/j.exger.2013.08.002&quot;&gt;Long-term calorie restriction decreases metabolic cost of movement and prevents decrease of physical activity during aging in rhesus monkeys.&lt;/a&gt; &lt;a href=&quot;#fnref:17&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:35&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;&lt;a href=&quot;https://doi.org/10.1161/CIRCRESAHA.115.306883&quot;&gt;Ortega et al. (2016)&lt;/a&gt;&lt;sup id=&quot;fnref:37&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:37&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;83&lt;/a&gt;&lt;/sup&gt; summarizes the relevant literature. Several observational studies have found that overweight but physically fit individuals have little to no increase in mortality rates relative to normal-weight fit individuals (the largest study&lt;sup id=&quot;fnref:36&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:36&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;84&lt;/a&gt;&lt;/sup&gt; found a statistically significant but small effect (RR = 1.1); three other studies found no significant effect). By comparison, unfit people had 2–3x higher mortality than fit individuals. &lt;a href=&quot;https://www.ahajournals.org/cms/10.1161/CIRCRESAHA.115.306883/asset/b702242c-ad02-4b6c-95b7-b60f578a8c73/assets/graphic/1752fig02.jpeg&quot;&gt;Figure 2&lt;/a&gt; (reproduced below) summarizes the results from the four studies.&lt;/p&gt;

      &lt;p&gt;&lt;img src=&quot;/assets/images/Ortega-2016-Figure-2.jpeg&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

      &lt;p&gt;This finding from Ortega et al. (2016) is actually stronger than necessary for our purposes. It would be sufficient to say that exercise cancels out the harm of high-calorie diets by burning off the excess calories. But this shows that exercise (mostly) cancels out the harm &lt;em&gt;even for people who don’t lose weight&lt;/em&gt;. &lt;a href=&quot;#fnref:35&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:62&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Hooper, L., Martin, N., Jimoh, O. F., Kirk, C., Foster, E., &amp;amp; Abdelhamid, A. S. (2020). &lt;a href=&quot;https://doi.org/10.1002/14651858.cd011737.pub3&quot;&gt;Reduction in saturated fat intake for cardiovascular disease.&lt;/a&gt; &lt;a href=&quot;#fnref:62&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:62:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:2&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Arguably, my credence for this latter claim should be higher than for the former claim, because reducing saturated fat has some chance of improving health and essentially no chance of harming health. But reducing saturated fat also has some costs (it makes your diet harder to follow).&lt;/p&gt;

      &lt;p&gt;In other words, on a cost-benefit analysis aimed at maximizing health, it’s clearly worth it to eat less saturated fat. But on an all-things-considered cost-benefit analysis, there’s more room for debate. &lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:64&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Zong, G., Li, Y., Wanders, A. J., Alssema, M., Zock, P. L., Willett, W. C., Hu, F. B. et al. (2016). &lt;a href=&quot;https://doi.org/10.1136/bmj.i5796&quot;&gt;Intake of individual saturated fatty acids and risk of coronary heart disease in US men and women: two prospective longitudinal cohort studies.&lt;/a&gt; &lt;a href=&quot;#fnref:64&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:65&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Heileson, J. L. (2019). &lt;a href=&quot;https://doi.org/10.1093/nutrit/nuz091&quot;&gt;Dietary saturated fat and heart disease: a narrative review.&lt;/a&gt; &lt;a href=&quot;#fnref:65&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:68&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Mensink, R. P., &amp;amp; World Health Organization (2016). &lt;a href=&quot;https://iris.who.int/bitstream/handle/10665/246104/9789241565349-eng.pdf&quot;&gt;Effects of saturated fatty acids on serum lipids and lipoproteins: a systematic review and regression analysis.&lt;/a&gt; &lt;a href=&quot;#fnref:68&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:67&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Navarese, E. P., Robinson, J. G., Kowalewski, M., Kolodziejczak, M., Andreotti, F., Bliden, K., Tantry, U. et al. (2018). &lt;a href=&quot;https://doi.org/10.1001/jama.2018.2525&quot;&gt;Association Between Baseline LDL-C Level and Total and Cardiovascular Mortality After LDL-C Lowering.&lt;/a&gt; &lt;a href=&quot;#fnref:67&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:66&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Ennezat, P. V., Guerbaai, R. A., Maréchaux, S., Le Jemtel, T. H., &amp;amp; François, P. (2022). &lt;a href=&quot;https://doi.org/10.1097/fjc.0000000000001345&quot;&gt;Extent of LDL-cholesterol Reduction and All-cause and Cardiovascular Mortality Benefit: a Systematic Review and Meta-analysis.&lt;/a&gt; &lt;a href=&quot;#fnref:66&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:71&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Bergeron, N., Chiu, S., Williams, P. T., King, S. M., &amp;amp; Krauss, R. M. (2019). &lt;a href=&quot;https://doi.org/10.1093/ajcn/nqz035&quot;&gt;Effects of red meat, white meat, and nonmeat protein sources on atherogenic lipoprotein measures in the context of low compared with high saturated fat intake: a randomized controlled trial.&lt;/a&gt; &lt;a href=&quot;#fnref:71&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:90&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Sacks, F. M., Lichtenstein, A. H., Wu, J. H. Y., Appel, L. J., Creager, M. A., Kris-Etherton, P. M., Miller, M. et al. (2017). &lt;a href=&quot;https://www.ahajournals.org/doi/full/10.1161/CIR.0000000000000510&quot;&gt;Dietary Fats and Cardiovascular Disease: A Presidential Advisory From the American Heart Association.&lt;/a&gt; doi: &lt;a href=&quot;https://doi.org/10.1161/cir.0000000000000510&quot;&gt;10.1161/cir.0000000000000510&lt;/a&gt; &lt;a href=&quot;#fnref:90&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:97&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Miettinen, M., Karvonen, M., Turpeinen, O., Elosuo, R., &amp;amp; Paavilainen, E. (1972). &lt;a href=&quot;https://doi.org/10.1016/s0140-6736(72)92208-8&quot;&gt;Effect of cholesterol-lowering diet on mortality from coronary heart-disease and other causes.&lt;/a&gt; &lt;a href=&quot;#fnref:97&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:98&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Ramsden, C. E., Zamora, D., Leelarthaepin, B., Majchrzak-Hong, S. F., Faurot, K. R., Suchindran, C. M., Ringel, A. et al. (2013). &lt;a href=&quot;https://doi.org/10.1136/bmj.e8707&quot;&gt;Use of dietary linoleic acid for secondary prevention of coronary heart disease and death: evaluation of recovered data from the Sydney Diet Heart Study and updated meta-analysis.&lt;/a&gt; &lt;a href=&quot;#fnref:98&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:99&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Rose, G. A., Thomson, W. B., &amp;amp; Williams, R. T. (1965). &lt;a href=&quot;https://doi.org/10.1136/bmj.1.5449.1531&quot;&gt;Corn Oil in Treatment of Ischaemic Heart Disease.&lt;/a&gt; &lt;a href=&quot;#fnref:99&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:69&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;The Cochrane review &lt;a href=&quot;https://www.cochranelibrary.com/cdsr/doi/10.1002/14651858.CD011737.pub3/full#CD011737-sec-0008&quot;&gt;summary of findings&lt;/a&gt; reports that the interventions prevented 2 deaths per 1000 participants over an average intervention length of 56 months, which equates to one death per 2300 person-years. &lt;a href=&quot;#fnref:69&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:74&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;See &lt;a href=&quot;https://doi.org/10.1001/jamainternmed.2015.0533&quot;&gt;Arem et al. (2015)&lt;/a&gt;&lt;sup id=&quot;fnref:75&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:75&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;85&lt;/a&gt;&lt;/sup&gt; and &lt;a href=&quot;https://doi.org/10.1161/circulationaha.121.058162&quot;&gt;Lee et al. (2022)&lt;/a&gt;&lt;sup id=&quot;fnref:73&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:73&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;86&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

      &lt;p&gt;These were pooled analyses of cohort studies, not RCTs. I could not quickly find reliable RCT numbers.&lt;/p&gt;

      &lt;p&gt;Neither of these studies reported deaths prevented per person-year. I calculated the numbers using provided relative risks multiplied by number of deaths per person, divided by follow-up time. Number of years per death prevented varied based on exercise duration and intensity. I found that exercise prevented approximately one death per 300 person-years when defining “exercise” as 7.5+ MET-hours/week for Arem et al. (2015) and 150–224 minutes of moderate physical activity for Lee et al. (2022) (these two definitions are roughly equivalent).&lt;/p&gt;

      &lt;p&gt;There are a number of meta-analyses of RCTs, for example:&lt;/p&gt;

      &lt;ul&gt;
        &lt;li&gt;&lt;a href=&quot;https://doi.org/10.1016/j.ahj.2011.07.017&quot;&gt;Lawler et al. (2011)&lt;/a&gt;&lt;sup id=&quot;fnref:78&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:78&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;87&lt;/a&gt;&lt;/sup&gt; found exercise had a relative risk (RR) of 0.74 on all-cause mortality for individuals who had experienced heart attacks.&lt;/li&gt;
        &lt;li&gt;&lt;a href=&quot;https://doi.org/10.1177/1534735420917462&quot;&gt;Morishita et al. (2020)&lt;/a&gt;&lt;sup id=&quot;fnref:79&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:79&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;88&lt;/a&gt;&lt;/sup&gt; found RR = 0.76 on cancer mortality for cancer patients.&lt;/li&gt;
      &lt;/ul&gt;

      &lt;p&gt;However, neither of these meta-analyses provided per-group mortality numbers or mean intervention length, so I can’t determine the number of years per death prevented without reading through every individual study. Based on the RRs, my guess is that these meta-analyses would give roughly similar numbers to the pooled analyses above.&lt;/p&gt;

      &lt;p&gt;To my knowledge, the most comprehensive meta-analysis is &lt;a href=&quot;https://doi.org/10.1186/s12889-020-09855-3&quot;&gt;Posadzki et al. (2020)&lt;/a&gt;&lt;sup id=&quot;fnref:81&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:81&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;89&lt;/a&gt;&lt;/sup&gt;, which reviewed 150 different Cochrane reviews and found an RR of 0.87 for all-cause mortality. But it provides even less information about the participants so I have no way of interpreting this number. &lt;a href=&quot;#fnref:74&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:52&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Dighriri, I. M., Alsubaie, A. M., Hakami, F. M., Hamithi, D. M., Alshekh, M. M., Khobrani, F. A., Dalak, F. E. et al. (2022). &lt;a href=&quot;https://doi.org/10.7759/cureus.30091&quot;&gt;Effects of Omega-3 Polyunsaturated Fatty Acids on Brain Functions: A Systematic Review.&lt;/a&gt; &lt;a href=&quot;#fnref:52&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:53&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Abdelhamid, A. S., Brown, T. J., Brainard, J. S., Biswas, P., Thorpe, G. C., Moore, H. J., Deane, K. H. et al. (2020). &lt;a href=&quot;https://doi.org/10.1002/14651858.cd003177.pub5&quot;&gt;Omega-3 fatty acids for the primary and secondary prevention of cardiovascular disease.&lt;/a&gt; &lt;a href=&quot;#fnref:53&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:54&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;When talking about the &lt;a href=&quot;#rhesus-monkey-studies-suggest-that-calorie-restriction-improves-longevity-but-only-if-you-eat-a-fairly-unhealthy-diet&quot;&gt;calorie restriction studies&lt;/a&gt;, I said that a negative effect is a red flag even if it’s non-significant. In this case I’m not too concerned because:&lt;/p&gt;

      &lt;ul&gt;
        &lt;li&gt;The effect on stroke was highly non-significant (p = 0.82). Compare to the NIA calorie restriction study which had p = 0.35 and p = 0.02 for young males and young females respectively.&lt;/li&gt;
        &lt;li&gt;RCTs show omega-3s improve short-term brain function.&lt;/li&gt;
      &lt;/ul&gt;

      &lt;p&gt;I think the most reasonable interpretation is that there’s no effect on stroke. &lt;a href=&quot;#fnref:54&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:56&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Jacobs, M. N., Covaci, A., Gheorghe, A., &amp;amp; Schepens, P. (2004). &lt;a href=&quot;https://doi.org/10.1021/jf035310q&quot;&gt;Time Trend Investigation of PCBs, PBDEs, and Organochlorine Pesticides in Selected &lt;i&gt;n&lt;/i&gt;−3 Polyunsaturated Fatty Acid Rich Dietary Fish Oil and Vegetable Oil Supplements; Nutritional Relevance for Human Essential &lt;i&gt;n&lt;/i&gt;−3 Fatty Acid Requirements.&lt;/a&gt; &lt;a href=&quot;#fnref:56&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:55&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Fernandes, A. R., Rose, M., White, S., Mortimer, D. N., &amp;amp; Gem, M. (2006). &lt;a href=&quot;https://doi.org/10.1080/02652030600660827&quot;&gt;Dioxins and polychlorinated biphenyls (PCBs) in fish oil dietary supplements: Occurrence and human exposure in the UK.&lt;/a&gt; &lt;a href=&quot;#fnref:55&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:57&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Raab, A., Stiboller, M., Gajdosechova, Z., Nelson, J., &amp;amp; Feldmann, J. (2016). &lt;a href=&quot;https://doi.org/10.1016/j.jfca.2016.09.008&quot;&gt;Element content and daily intake from dietary supplements (nutraceuticals) based on algae, garlic, yeast fish and krill oils—Should consumers be worried?.&lt;/a&gt; &lt;a href=&quot;#fnref:57&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:58&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Winwood, R. J. (2013). &lt;a href=&quot;https://doi.org/10.1533/9780857098863.4.389&quot;&gt;Algal oil as a source of omega-3 fatty acids.&lt;/a&gt; &lt;a href=&quot;#fnref:58&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:31&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;See also Walker’s &lt;a href=&quot;https://sleepdiplomat.wordpress.com/2019/12/19/why-we-sleep-responses-to-questions-from-readers/#sleep_injury&quot;&gt;response&lt;/a&gt; on why he presented the data the way he did. &lt;a href=&quot;#fnref:31&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:29&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;For example, the book cites &lt;a href=&quot;https://doi.org/10.1523/JNEUROSCI.5254-14.2015&quot;&gt;Goldstein-Piekarski et al. (2015)&lt;/a&gt;&lt;sup id=&quot;fnref:30&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:30&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;90&lt;/a&gt;&lt;/sup&gt; (Walker being a co-author) for the passage “When we are deprived of REM, studies have found, we have a more difficult time reading others’ facial expressions.” I was skeptical of this statement even before knowing Walker co-authored the paper because it has the vibe of the sort of fun quirky result that doesn’t survive the replication crisis. But I don’t particularly care about this claim (it’s not actionable in any way) so I didn’t bother to investigate it.&lt;/p&gt;

      &lt;p&gt;(Also, this is a nitpick but the quoted passage says “studies have found” while only citing a single study.) &lt;a href=&quot;#fnref:29&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:18&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Cirelli, C., &amp;amp; Tononi, G. (2008). &lt;a href=&quot;https://doi.org/10.1371/journal.pbio.0060216&quot;&gt;Is Sleep Essential?&lt;/a&gt; &lt;a href=&quot;#fnref:18&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:19&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Hirshkowitz, M., Whiton, K., Albert, S. M., Alessi, C., Bruni, O., DonCarlos, L., Hazen, N., Herman, J., Katz, E. S., Kheirandish-Gozal, L., Neubauer, D. N., O’Donnell, A. E., Ohayon, M., Peever, J., Rawding, R., Sachdeva, R. C., Setters, B., Vitiello, M. V., Ware, J. C., &amp;amp; Adams Hillard, P. J. (2015). &lt;a href=&quot;https://doi.org/10.1016/j.sleh.2014.12.010&quot;&gt;National Sleep Foundation’s sleep time duration recommendations: methodology and results summary.&lt;/a&gt; &lt;a href=&quot;#fnref:19&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:20&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;They gave different recommended sleep durations for different age ranges. This table reproduces all the recommendations for individuals age 6 and up (since I assume nobody under 6 is reading this):&lt;/p&gt;

      &lt;table&gt;
        &lt;thead&gt;
          &lt;tr&gt;
            &lt;th&gt;Age&lt;/th&gt;
            &lt;th&gt;Recommended&lt;/th&gt;
            &lt;th&gt;May be appropriate&lt;/th&gt;
          &lt;/tr&gt;
        &lt;/thead&gt;
        &lt;tbody&gt;
          &lt;tr&gt;
            &lt;td&gt;6–13 y&lt;/td&gt;
            &lt;td&gt;9 to 11&lt;/td&gt;
            &lt;td&gt;7 to 12&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
            &lt;td&gt;14–17 y&lt;/td&gt;
            &lt;td&gt;8 to 10&lt;/td&gt;
            &lt;td&gt;7 to 11&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
            &lt;td&gt;18–25 y&lt;/td&gt;
            &lt;td&gt;7 to 9&lt;/td&gt;
            &lt;td&gt;6 to 11&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
            &lt;td&gt;26–64 y&lt;/td&gt;
            &lt;td&gt;7 to 9&lt;/td&gt;
            &lt;td&gt;6 to 10&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
            &lt;td&gt;&amp;gt;64 y&lt;/td&gt;
            &lt;td&gt;7 to 8&lt;/td&gt;
            &lt;td&gt;5 to 9&lt;/td&gt;
          &lt;/tr&gt;
        &lt;/tbody&gt;
      &lt;/table&gt;
      &lt;p&gt;&lt;a href=&quot;#fnref:20&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:32&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Reutrakul, S., &amp;amp; Van Cauter, E. (2018). &lt;a href=&quot;https://doi.org/10.1016/j.metabol.2018.02.010&quot;&gt;Sleep influences on obesity, insulin resistance, and risk of type 2 diabetes.&lt;/a&gt; &lt;a href=&quot;#fnref:32&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:27&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;&lt;a href=&quot;https://en.wikipedia.org/wiki/Mendelian_randomization&quot;&gt;Mendelian randomization&lt;/a&gt; is a technique for conducting a natural experiment. Instead of looking at heart attack risk as a function of sleep duration, you look at heart attack risk as a function of &lt;em&gt;genes&lt;/em&gt; that determine sleep duration. The idea is that some confounding environmental variable might cause both shortened sleep and increased heart attack risk, but it can’t change subjects’ genes, so any observed relationship between &lt;em&gt;genetic&lt;/em&gt; sleep duration and heart attack risk is probably causal.&lt;/p&gt;

      &lt;p&gt;I don’t have a strong opinion on how useful this technique is. &lt;a href=&quot;#fnref:27&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:21&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Dashti, H. S., Jones, S. E., Wood, A. R., Lane, J. M., van Hees, V. T., Wang, H., Rhodes, J. A. et al. (2019). &lt;a href=&quot;https://doi.org/10.1038/s41467-019-08917-4&quot;&gt;Genome-wide association study identifies genetic loci for self-reported habitual sleep duration supported by accelerometer-derived estimates.&lt;/a&gt; &lt;a href=&quot;#fnref:21&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:22&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Daghlas, I., Dashti, H. S., Lane, J., Aragam, K. G., Rutter, M. K., Saxena, R., &amp;amp; Vetter, C. (2019). &lt;a href=&quot;https://doi.org/10.1016/j.jacc.2019.07.022&quot;&gt;Sleep Duration and Myocardial Infarction.&lt;/a&gt; &lt;a href=&quot;#fnref:22&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:23&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Zhuang, Z., Gao, M., Yang, R., Li, N., Liu, Z., Cao, W., &amp;amp; Huang, T. (2020). &lt;a href=&quot;https://doi.org/10.1186/s12944-020-01257-z&quot;&gt;Association of physical activity, sedentary behaviours and sleep duration with cardiovascular diseases and lipid profiles: a Mendelian randomization analysis.&lt;/a&gt; &lt;a href=&quot;#fnref:23&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:24&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Yang, Y., Fan, J., Shi, X., Wang, Y., Yang, C., Lian, J., Wang, N. et al. (2022). &lt;a href=&quot;https://doi.org/10.1002/ehf2.14016&quot;&gt;Causal associations between sleep traits and four cardiac diseases: a Mendelian randomization study.&lt;/a&gt; &lt;a href=&quot;#fnref:24&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:26&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;This study found about a 1% increased risk from short sleep, contrasted with Daghlas et al. (2019) which found about a 20% increased risk. &lt;a href=&quot;#fnref:26&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:25&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Liao, L. z., Li, W. d., Liu, Y., Li, J. p., Zhuang, X. d., &amp;amp; Liao, X. x. (2020). &lt;a href=&quot;https://doi.org/10.1016/j.sleep.2019.08.014&quot;&gt;Causal assessment of sleep on coronary heart disease.&lt;/a&gt; &lt;a href=&quot;#fnref:25&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:46&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Lloret, M. A., Cervera-Ferri, A., Nepomuceno, M., Monllor, P., Esteve, D., &amp;amp; Lloret, A. (2020). &lt;a href=&quot;https://doi.org/10.3390/ijms21031168&quot;&gt;Is Sleep Disruption a Cause or Consequence of Alzheimer’s Disease? Reviewing Its Possible Role as a Biomarker.&lt;/a&gt; &lt;a href=&quot;#fnref:46&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:125&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Blondel, N., Berthoin, S., Billat, V., &amp;amp; Lensel, G. (2001). &lt;a href=&quot;http://dx.doi.org/10.1055/s-2001-11357&quot;&gt;Relationship between run times to exhaustion at 90, 100, 120, and 140% of vVO2max and velocity expressed relatively to critical velocity and maximal velocity.&lt;/a&gt; &lt;a href=&quot;#fnref:125&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:34&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Chiu, L. Z. (2023). &lt;a href=&quot;https://doi.org/10.1519/jsc.0000000000004655&quot;&gt;“Knees Out” or “Knees In”? Volitional Lateral vs. Medial Hip Rotation During Barbell Squats.&lt;/a&gt; &lt;a href=&quot;#fnref:34&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:37&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Ortega, F. B., Lavie, C. J., &amp;amp; Blair, S. N. (2016). &lt;a href=&quot;https://doi.org/10.1161/circresaha.115.306883&quot;&gt;Obesity and Cardiovascular Disease.&lt;/a&gt; &lt;a href=&quot;#fnref:37&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:36&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Wei, M., Kampert, J. B., Barlow, C. E., Nichaman, M. Z., Gibbons, L. W., Paffenbarger Jr, R. S., &amp;amp; Blair, S. N. (1999). &lt;a href=&quot;https://doi.org/10.1001/jama.282.16.1547&quot;&gt;Relationship between low cardiorespiratory fitness and mortality in normal-weight, overweight, and obese men.&lt;/a&gt; &lt;a href=&quot;#fnref:36&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:75&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Arem, H., Moore, S. C., Patel, A., Hartge, P., Berrington de Gonzalez, A., Visvanathan, K., Campbell, P. T. et al. (2015). &lt;a href=&quot;https://doi.org/10.1001/jamainternmed.2015.0533&quot;&gt;Leisure Time Physical Activity and Mortality.&lt;/a&gt; &lt;a href=&quot;#fnref:75&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:73&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Lee, D. H., Rezende, L. F. M., Joh, H. K., Keum, N., Ferrari, G., Rey-Lopez, J. P., Rimm, E. B. et al. (2022). &lt;a href=&quot;https://doi.org/10.1161/circulationaha.121.058162&quot;&gt;Long-Term Leisure-Time Physical Activity Intensity and All-Cause and Cause-Specific Mortality: A Prospective Cohort of US Adults.&lt;/a&gt; &lt;a href=&quot;#fnref:73&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:78&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Lawler, P. R., Filion, K. B., &amp;amp; Eisenberg, M. J. (2011). &lt;a href=&quot;https://doi.org/10.1016/j.ahj.2011.07.017&quot;&gt;Efficacy of exercise-based cardiac rehabilitation post–myocardial infarction: A systematic review and meta-analysis of randomized controlled trials.&lt;/a&gt; &lt;a href=&quot;#fnref:78&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:79&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Morishita, S., Hamaue, Y., Fukushima, T., Tanaka, T., Fu, J. B., &amp;amp; Nakano, J. (2020). &lt;a href=&quot;https://doi.org/10.1177/1534735420917462&quot;&gt;Effect of Exercise on Mortality and Recurrence in Patients With Cancer: A Systematic Review and Meta-Analysis.&lt;/a&gt; &lt;a href=&quot;#fnref:79&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:81&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Posadzki, P., Pieper, D., Bajpai, R., Makaruk, H., Könsgen, N., Neuhaus, A. L., &amp;amp; Semwal, M. (2020). &lt;a href=&quot;https://doi.org/10.1186/s12889-020-09855-3&quot;&gt;Exercise/physical activity and health outcomes: an overview of Cochrane systematic reviews.&lt;/a&gt; &lt;a href=&quot;#fnref:81&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:30&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Goldstein-Piekarski, A. N., Greer, S. M., Saletin, J. M., &amp;amp; Walker, M. P. (2015). &lt;a href=&quot;https://doi.org/10.1523/jneurosci.5254-14.2015&quot;&gt;Sleep Deprivation Impairs the Human Central and Peripheral Nervous System Discrimination of Social Threat.&lt;/a&gt; &lt;a href=&quot;#fnref:30&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>I Have Whatever the Opposite of a Placebo Effect Is</title>
				<pubDate>Mon, 02 Sep 2024 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2024/09/02/I_have_the_opposite_of_placebo/</link>
				<guid isPermaLink="true">http://mdickens.me/2024/09/02/I_have_the_opposite_of_placebo/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;Two personal stories:&lt;/p&gt;

&lt;h3 id=&quot;a-story-about-caffeine&quot;&gt;A story about caffeine&lt;/h3&gt;

&lt;p&gt;When I first started working a full-time job, I started tracking my daily (subjective) productivity along with a number of variables that I thought might be relevant, like whether I exercised that morning or whether I took caffeine. I couldn’t perceive any differences in productivity based on any of the variables.&lt;/p&gt;

&lt;p&gt;After collecting about a year of data, I ran a regression. I found that most variables had no noticeable effect, but caffeine had a &lt;em&gt;huge&lt;/em&gt; effect—it increased my subjective productivity by about 20 percentage points, or an extra ~1.5 productive hours per day. Somehow I never noticed this enormous effect. Whatever the opposite of a placebo effect is, that’s what I had: caffeine had a large effect, but I thought it had no effect.&lt;/p&gt;
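&lt;p&gt;A fit of that sort can be sketched in a few lines. All numbers, column choices, and effect sizes below are invented for illustration, not the actual log:&lt;/p&gt;

```python
import numpy as np

# Hypothetical daily log (made-up data): each row is one day.
# Predictors: caffeine (0/1), morning exercise (0/1).
# Outcome: subjective productivity in percentage points.
X = np.array([
    [1, 1], [1, 0], [0, 1], [0, 0],
    [1, 1], [1, 0], [0, 1], [0, 0],
], dtype=float)
y = np.array([75, 72, 52, 55, 78, 70, 48, 50], dtype=float)

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
intercept, caffeine_effect, exercise_effect = coef
print(f"caffeine effect: {caffeine_effect:+.1f} percentage points")
```

&lt;p&gt;The point of the regression is exactly this: with enough days of data, a fit can surface an effect that day-to-day perception misses entirely.&lt;/p&gt;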

&lt;h3 id=&quot;a-story-about-sleep&quot;&gt;A story about sleep&lt;/h3&gt;

&lt;p&gt;People always say that exercise helps them sleep better. I thought it didn’t work for me. When I do cardio, even like two hours of cardio, I don’t feel more tired in the evening and I don’t fall asleep (noticeably) faster.&lt;/p&gt;

&lt;p&gt;Yesterday, I decided to test this. I wrote a script to predict how long I slept based on how many calories my phone says I burned. The idea is that if I sleep less, that probably means I didn’t need as much sleep because the sleep I got was higher quality. (I almost always wake up naturally without an alarm.)&lt;/p&gt;

&lt;p&gt;Well, turns out exercise &lt;em&gt;does&lt;/em&gt; help. For every 500 calories burned (which is about what I burn during a normal cardio session), I sleep 25 minutes less. Once again, exercise had a huge effect, and I thought it didn’t do anything.&lt;/p&gt;
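&lt;p&gt;A minimal version of such a script is just a one-variable least-squares fit. The data here are invented for illustration, chosen so the slope lands at the reported 25 minutes per 500 calories:&lt;/p&gt;

```python
# Hypothetical log: calories burned during the day vs. minutes slept that night.
calories = [0, 200, 500, 700, 1000]
minutes_slept = [480, 470, 455, 445, 430]

n = len(calories)
mean_x = sum(calories) / n
mean_y = sum(minutes_slept) / n

# Ordinary least-squares slope: cov(x, y) / var(x).
slope = (
    sum((x - mean_x) * (y - mean_y) for x, y in zip(calories, minutes_slept))
    / sum((x - mean_x) ** 2 for x in calories)
)
print(f"sleep change per 500 kcal burned: {slope * 500:+.0f} minutes")
# prints "sleep change per 500 kcal burned: -25 minutes"
```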

&lt;p&gt;I guess I’m not very observant.&lt;/p&gt;



                </description>
			</item>
		
			<item>
				<title>Protein Quality (DIAAS) Calculator</title>
				<pubDate>Thu, 29 Aug 2024 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2024/08/29/DIAAS_calculator/</link>
				<guid isPermaLink="true">http://mdickens.me/2024/08/29/DIAAS_calculator/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;&lt;em&gt;Update 2025-01-17: I discovered another protein quality calculator that’s much more comprehensive than mine: &lt;a href=&quot;https://www.diaas-calculator.com/&quot;&gt;https://www.diaas-calculator.com/&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;You may know that complete proteins are good because they contain every essential amino acid. But you might not know that that’s not the full story.&lt;/p&gt;

&lt;p&gt;Take wheat. Wheat is a complete protein—it contains all nine essential amino acids. But it has a problem. Wheat only contains 27mg of lysine (an essential amino acid) per gram of protein, whereas the Food and Agriculture Organization recommends 48mg of lysine per gram. To make full use of a gram of protein, your body needs to get those 48mg. It doesn’t matter that wheat has lots of other essential amino acids. Once your body uses up all the lysine, it can’t make good use of the other amino acids in wheat protein.&lt;/p&gt;

&lt;p&gt;You can evaluate the protein quality of a food using the &lt;a href=&quot;https://en.wikipedia.org/wiki/Digestible_Indispensable_Amino_Acid_Score&quot;&gt;Digestible Indispensable Amino Acid Score (DIAAS)&lt;/a&gt;. This score determines the quality of a source of protein based on which essential amino acid will run out first, adjusted for digestibility. A score of 100 means the protein has plenty of every essential amino acid.&lt;/p&gt;

&lt;p&gt;Sometimes you can improve the protein quality of your food by mixing different ingredients. Wheat has a DIAAS of 57 because it only has 57% as much lysine per gram as your body needs. Peas have a score of 82 because they don’t have enough methionine + cysteine. But peas have 131% of the lysine requirement, and wheat has 149% of methionine + cysteine, so mix them together and they cover for each other’s weaknesses. A 50/50 mixture of wheat and pea protein has a DIAAS of 94.&lt;/p&gt;
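&lt;p&gt;The mixing arithmetic can be sketched directly: blend the per-amino-acid scores in proportion to each source’s share of total protein, then take the minimum. This toy version tracks only the two limiting amino acids quoted above; a real DIAAS calculation covers all nine essential amino acids and adjusts for digestibility.&lt;/p&gt;

```python
# Per-amino-acid adequacy (% of requirement per gram of protein), using the
# figures quoted above. Only the two limiting amino acids are shown.
wheat = {"lysine": 57, "met+cys": 149}
pea = {"lysine": 131, "met+cys": 82}

def mixture_diaas(blend):
    """DIAAS of a protein blend: the score of whichever amino acid runs out
    first. `blend` is a list of (scores, weight) pairs, where the weights are
    fractions of total protein (not total food mass) and sum to 1."""
    amino_acids = blend[0][0].keys()
    return min(
        sum(scores[aa] * weight for scores, weight in blend)
        for aa in amino_acids
    )

print(mixture_diaas([(wheat, 0.5), (pea, 0.5)]))  # 94.0, matching the text
```

&lt;p&gt;Lysine blends to (57 + 131) / 2 = 94 and methionine + cysteine to (149 + 82) / 2 = 115.5, so lysine is still the limiting amino acid, but the score rises from 57 to 94.&lt;/p&gt;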

&lt;p&gt;With this calculator, you can determine the DIAAS for mixtures of different protein sources.&lt;/p&gt;

&lt;!-- more --&gt;

&lt;script src=&quot;/scripts/DIAAS.js&quot; defer=&quot;&quot;&gt;&lt;/script&gt;

&lt;form name=&quot;DIAAS&quot;&gt;
    &lt;table style=&quot;width: 200px; margin-left: auto; margin-right: auto; table-layout: fixed&quot;&gt;
    &lt;tr&gt;
        &lt;th&gt;Ingredient&lt;/th&gt;
        &lt;th&gt;Content (%)&lt;/th&gt;
    &lt;/tr&gt;

    
        &lt;tr&gt;
            &lt;td style=&quot;text-align:left&quot;&gt;Soy&lt;/td&gt;
            &lt;td&gt;&lt;input type=&quot;number&quot; id=&quot;SoyProtein&quot; value=&quot;0&quot; min=&quot;0&quot; max=&quot;100&quot; step=&quot;1&quot; /&gt;&lt;/td&gt;
        &lt;/tr&gt;
    
        &lt;tr&gt;
            &lt;td style=&quot;text-align:left&quot;&gt;Wheat&lt;/td&gt;
            &lt;td&gt;&lt;input type=&quot;number&quot; id=&quot;WheatProtein&quot; value=&quot;0&quot; min=&quot;0&quot; max=&quot;100&quot; step=&quot;1&quot; /&gt;&lt;/td&gt;
        &lt;/tr&gt;
    
        &lt;tr&gt;
            &lt;td style=&quot;text-align:left&quot;&gt;Pea&lt;/td&gt;
            &lt;td&gt;&lt;input type=&quot;number&quot; id=&quot;PeaProtein&quot; value=&quot;0&quot; min=&quot;0&quot; max=&quot;100&quot; step=&quot;1&quot; /&gt;&lt;/td&gt;
        &lt;/tr&gt;
    
        &lt;tr&gt;
            &lt;td style=&quot;text-align:left&quot;&gt;Fava bean&lt;/td&gt;
            &lt;td&gt;&lt;input type=&quot;number&quot; id=&quot;Fava beanProtein&quot; value=&quot;0&quot; min=&quot;0&quot; max=&quot;100&quot; step=&quot;1&quot; /&gt;&lt;/td&gt;
        &lt;/tr&gt;
    
        &lt;tr&gt;
            &lt;td style=&quot;text-align:left&quot;&gt;Hemp&lt;/td&gt;
            &lt;td&gt;&lt;input type=&quot;number&quot; id=&quot;HempProtein&quot; value=&quot;0&quot; min=&quot;0&quot; max=&quot;100&quot; step=&quot;1&quot; /&gt;&lt;/td&gt;
        &lt;/tr&gt;
    
        &lt;tr&gt;
            &lt;td style=&quot;text-align:left&quot;&gt;Rice&lt;/td&gt;
            &lt;td&gt;&lt;input type=&quot;number&quot; id=&quot;RiceProtein&quot; value=&quot;0&quot; min=&quot;0&quot; max=&quot;100&quot; step=&quot;1&quot; /&gt;&lt;/td&gt;
        &lt;/tr&gt;
    
        &lt;tr&gt;
            &lt;td style=&quot;text-align:left&quot;&gt;Potato&lt;/td&gt;
            &lt;td&gt;&lt;input type=&quot;number&quot; id=&quot;PotatoProtein&quot; value=&quot;0&quot; min=&quot;0&quot; max=&quot;100&quot; step=&quot;1&quot; /&gt;&lt;/td&gt;
        &lt;/tr&gt;
    
        &lt;tr&gt;
            &lt;td style=&quot;text-align:left&quot;&gt;Oat&lt;/td&gt;
            &lt;td&gt;&lt;input type=&quot;number&quot; id=&quot;OatProtein&quot; value=&quot;0&quot; min=&quot;0&quot; max=&quot;100&quot; step=&quot;1&quot; /&gt;&lt;/td&gt;
        &lt;/tr&gt;
    
        &lt;tr&gt;
            &lt;td style=&quot;text-align:left&quot;&gt;Corn&lt;/td&gt;
            &lt;td&gt;&lt;input type=&quot;number&quot; id=&quot;CornProtein&quot; value=&quot;0&quot; min=&quot;0&quot; max=&quot;100&quot; step=&quot;1&quot; /&gt;&lt;/td&gt;
        &lt;/tr&gt;
    
        &lt;tr&gt;
            &lt;td style=&quot;text-align:left&quot;&gt;Rapeseed&lt;/td&gt;
            &lt;td&gt;&lt;input type=&quot;number&quot; id=&quot;RapeseedProtein&quot; value=&quot;0&quot; min=&quot;0&quot; max=&quot;100&quot; step=&quot;1&quot; /&gt;&lt;/td&gt;
        &lt;/tr&gt;
    
        &lt;tr&gt;
            &lt;td style=&quot;text-align:left&quot;&gt;Lupin&lt;/td&gt;
            &lt;td&gt;&lt;input type=&quot;number&quot; id=&quot;LupinProtein&quot; value=&quot;0&quot; min=&quot;0&quot; max=&quot;100&quot; step=&quot;1&quot; /&gt;&lt;/td&gt;
        &lt;/tr&gt;
    
        &lt;tr&gt;
            &lt;td style=&quot;text-align:left&quot;&gt;Canola&lt;/td&gt;
            &lt;td&gt;&lt;input type=&quot;number&quot; id=&quot;CanolaProtein&quot; value=&quot;0&quot; min=&quot;0&quot; max=&quot;100&quot; step=&quot;1&quot; /&gt;&lt;/td&gt;
        &lt;/tr&gt;
    
        &lt;tr&gt;
            &lt;td style=&quot;text-align:left&quot;&gt;Whey&lt;/td&gt;
            &lt;td&gt;&lt;input type=&quot;number&quot; id=&quot;WheyProtein&quot; value=&quot;0&quot; min=&quot;0&quot; max=&quot;100&quot; step=&quot;1&quot; /&gt;&lt;/td&gt;
        &lt;/tr&gt;
    
    &lt;/table&gt;

    &lt;p&gt;&lt;input type=&quot;button&quot; class=&quot;button&quot; name=&quot;button&quot; value=&quot;Calculate&quot; onclick=&quot;fillDIAAS()&quot; /&gt;&lt;/p&gt;

    &lt;table width=&quot;400px&quot; style=&quot;border-collapse:collapse; margin-left:auto; margin-right:auto; table-layout:fixed&quot;&gt;
        &lt;tr&gt;
            &lt;td&gt;DIAAS&lt;/td&gt;
            &lt;td&gt;&lt;strong&gt;&lt;div id=&quot;DIAAS&quot;&gt;&lt;/div&gt;&lt;/strong&gt;&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
            &lt;td&gt;Limiting amino acid&lt;/td&gt;
            &lt;td&gt;&lt;strong&gt;&lt;div id=&quot;limitingAminoAcid&quot;&gt;&lt;/div&gt;&lt;/strong&gt;&lt;/td&gt;
        &lt;/tr&gt;
    &lt;/table&gt;

    &lt;p&gt;Full amino acid profile (100 = recommended dose):&lt;/p&gt;
    &lt;table&gt;
        &lt;tr&gt;
            &lt;td&gt;Histidine&lt;/td&gt;&lt;td&gt;Isoleucine&lt;/td&gt;&lt;td&gt;Leucine&lt;/td&gt;&lt;td&gt;Lysine&lt;/td&gt;&lt;td&gt;Met + Cys&lt;/td&gt;&lt;td&gt;Phe + Tyr&lt;/td&gt;&lt;td&gt;Threonine&lt;/td&gt;&lt;td&gt;Tryptophan&lt;/td&gt;&lt;td&gt;Valine&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
            &lt;td&gt;&lt;div id=&quot;Histidine&quot;&gt;&lt;/div&gt;&lt;/td&gt;
            &lt;td&gt;&lt;div id=&quot;Isoleucine&quot;&gt;&lt;/div&gt;&lt;/td&gt;
            &lt;td&gt;&lt;div id=&quot;Leucine&quot;&gt;&lt;/div&gt;&lt;/td&gt;
            &lt;td&gt;&lt;div id=&quot;Lysine&quot;&gt;&lt;/div&gt;&lt;/td&gt;
            &lt;td&gt;&lt;div id=&quot;MethionineCysteine&quot;&gt;&lt;/div&gt;&lt;/td&gt;
            &lt;td&gt;&lt;div id=&quot;PhenylalanineTyrosine&quot;&gt;&lt;/div&gt;&lt;/td&gt;
            &lt;td&gt;&lt;div id=&quot;Threonine&quot;&gt;&lt;/div&gt;&lt;/td&gt;
            &lt;td&gt;&lt;div id=&quot;Tryptophan&quot;&gt;&lt;/div&gt;&lt;/td&gt;
            &lt;td&gt;&lt;div id=&quot;Valine&quot;&gt;&lt;/div&gt;&lt;/td&gt;
        &lt;/tr&gt;
    &lt;/table&gt;
&lt;/form&gt;

&lt;h2 id=&quot;good-combinations-of-plant-proteins&quot;&gt;Good combinations of plant proteins&lt;/h2&gt;

&lt;ol&gt;
  &lt;li&gt;Soy protein alone has a DIAAS of 102.&lt;/li&gt;
  &lt;li&gt;22% pea + 36% fava bean + 42% hemp has a DIAAS of 96.&lt;/li&gt;
  &lt;li&gt;50% wheat + 50% pea has a DIAAS of 94.&lt;/li&gt;
  &lt;li&gt;28g of wheat protein plus a 500mg &lt;a href=&quot;https://www.amazon.com/NOW-L-Lysine-500-100-Tablets/dp/B000MGOWOC/&quot;&gt;lysine pill&lt;/a&gt; has a DIAAS of 94.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2 id=&quot;table-of-amino-acid-profiles&quot;&gt;Table of amino acid profiles&lt;/h2&gt;

&lt;p&gt;Amino acid values are scaled to the FAO reference values for adult amino acid requirements, so a score of 100 means the protein exactly meets the requirement for that amino acid.&lt;/p&gt;

&lt;p&gt;Calculated using the amino acid values from &lt;a href=&quot;https://doi.org/10.1002/fsn3.1809&quot;&gt;Herreman et al. (2020)&lt;/a&gt;&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt; and the reference values from &lt;a href=&quot;https://www.fao.org/ag/humannutrition/35978-02317b979a686a57aa4593304ffc17f06.pdf&quot;&gt;FAO Expert Consultation (2011)&lt;/a&gt;&lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Protein source&lt;/th&gt;
      &lt;th&gt;DIAAS&lt;/th&gt;
      &lt;th&gt;Histidine&lt;/th&gt;
      &lt;th&gt;Isoleucine&lt;/th&gt;
      &lt;th&gt;Leucine&lt;/th&gt;
      &lt;th&gt;Lysine&lt;/th&gt;
      &lt;th&gt;Met + Cys&lt;/th&gt;
      &lt;th&gt;Phe + Tyr&lt;/th&gt;
      &lt;th&gt;Threonine&lt;/th&gt;
      &lt;th&gt;Tryptophan&lt;/th&gt;
      &lt;th&gt;Valine&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Soy&lt;/td&gt;
      &lt;td&gt;102&lt;/td&gt;
      &lt;td&gt;149&lt;/td&gt;
      &lt;td&gt;132&lt;/td&gt;
      &lt;td&gt;110&lt;/td&gt;
      &lt;td&gt;114&lt;/td&gt;
      &lt;td&gt;107&lt;/td&gt;
      &lt;td&gt;186&lt;/td&gt;
      &lt;td&gt;130&lt;/td&gt;
      &lt;td&gt;170&lt;/td&gt;
      &lt;td&gt;102&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Wheat&lt;/td&gt;
      &lt;td&gt;57&lt;/td&gt;
      &lt;td&gt;148&lt;/td&gt;
      &lt;td&gt;97&lt;/td&gt;
      &lt;td&gt;94&lt;/td&gt;
      &lt;td&gt;57&lt;/td&gt;
      &lt;td&gt;149&lt;/td&gt;
      &lt;td&gt;138&lt;/td&gt;
      &lt;td&gt;97&lt;/td&gt;
      &lt;td&gt;164&lt;/td&gt;
      &lt;td&gt;99&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Pea&lt;/td&gt;
      &lt;td&gt;82&lt;/td&gt;
      &lt;td&gt;124&lt;/td&gt;
      &lt;td&gt;108&lt;/td&gt;
      &lt;td&gt;94&lt;/td&gt;
      &lt;td&gt;131&lt;/td&gt;
      &lt;td&gt;82&lt;/td&gt;
      &lt;td&gt;147&lt;/td&gt;
      &lt;td&gt;117&lt;/td&gt;
      &lt;td&gt;99&lt;/td&gt;
      &lt;td&gt;89&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Fava bean&lt;/td&gt;
      &lt;td&gt;65&lt;/td&gt;
      &lt;td&gt;135&lt;/td&gt;
      &lt;td&gt;113&lt;/td&gt;
      &lt;td&gt;103&lt;/td&gt;
      &lt;td&gt;113&lt;/td&gt;
      &lt;td&gt;65&lt;/td&gt;
      &lt;td&gt;151&lt;/td&gt;
      &lt;td&gt;113&lt;/td&gt;
      &lt;td&gt;88&lt;/td&gt;
      &lt;td&gt;89&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Hemp&lt;/td&gt;
      &lt;td&gt;64&lt;/td&gt;
      &lt;td&gt;155&lt;/td&gt;
      &lt;td&gt;113&lt;/td&gt;
      &lt;td&gt;92&lt;/td&gt;
      &lt;td&gt;64&lt;/td&gt;
      &lt;td&gt;142&lt;/td&gt;
      &lt;td&gt;166&lt;/td&gt;
      &lt;td&gt;108&lt;/td&gt;
      &lt;td&gt;129&lt;/td&gt;
      &lt;td&gt;106&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Corn&lt;/td&gt;
      &lt;td&gt;43&lt;/td&gt;
      &lt;td&gt;138&lt;/td&gt;
      &lt;td&gt;96&lt;/td&gt;
      &lt;td&gt;175&lt;/td&gt;
      &lt;td&gt;43&lt;/td&gt;
      &lt;td&gt;148&lt;/td&gt;
      &lt;td&gt;178&lt;/td&gt;
      &lt;td&gt;107&lt;/td&gt;
      &lt;td&gt;67&lt;/td&gt;
      &lt;td&gt;97&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Rice&lt;/td&gt;
      &lt;td&gt;56&lt;/td&gt;
      &lt;td&gt;116&lt;/td&gt;
      &lt;td&gt;95&lt;/td&gt;
      &lt;td&gt;87&lt;/td&gt;
      &lt;td&gt;56&lt;/td&gt;
      &lt;td&gt;122&lt;/td&gt;
      &lt;td&gt;151&lt;/td&gt;
      &lt;td&gt;93&lt;/td&gt;
      &lt;td&gt;147&lt;/td&gt;
      &lt;td&gt;102&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Potato&lt;/td&gt;
      &lt;td&gt;125&lt;/td&gt;
      &lt;td&gt;125&lt;/td&gt;
      &lt;td&gt;166&lt;/td&gt;
      &lt;td&gt;155&lt;/td&gt;
      &lt;td&gt;145&lt;/td&gt;
      &lt;td&gt;135&lt;/td&gt;
      &lt;td&gt;266&lt;/td&gt;
      &lt;td&gt;205&lt;/td&gt;
      &lt;td&gt;165&lt;/td&gt;
      &lt;td&gt;148&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Oat&lt;/td&gt;
      &lt;td&gt;68&lt;/td&gt;
      &lt;td&gt;114&lt;/td&gt;
      &lt;td&gt;107&lt;/td&gt;
      &lt;td&gt;102&lt;/td&gt;
      &lt;td&gt;68&lt;/td&gt;
      &lt;td&gt;177&lt;/td&gt;
      &lt;td&gt;171&lt;/td&gt;
      &lt;td&gt;105&lt;/td&gt;
      &lt;td&gt;142&lt;/td&gt;
      &lt;td&gt;110&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Rapeseed&lt;/td&gt;
      &lt;td&gt;80&lt;/td&gt;
      &lt;td&gt;134&lt;/td&gt;
      &lt;td&gt;96&lt;/td&gt;
      &lt;td&gt;84&lt;/td&gt;
      &lt;td&gt;80&lt;/td&gt;
      &lt;td&gt;147&lt;/td&gt;
      &lt;td&gt;117&lt;/td&gt;
      &lt;td&gt;120&lt;/td&gt;
      &lt;td&gt;137&lt;/td&gt;
      &lt;td&gt;99&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Lupin&lt;/td&gt;
      &lt;td&gt;80&lt;/td&gt;
      &lt;td&gt;151&lt;/td&gt;
      &lt;td&gt;111&lt;/td&gt;
      &lt;td&gt;96&lt;/td&gt;
      &lt;td&gt;89&lt;/td&gt;
      &lt;td&gt;80&lt;/td&gt;
      &lt;td&gt;153&lt;/td&gt;
      &lt;td&gt;120&lt;/td&gt;
      &lt;td&gt;93&lt;/td&gt;
      &lt;td&gt;84&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Canola&lt;/td&gt;
      &lt;td&gt;85&lt;/td&gt;
      &lt;td&gt;131&lt;/td&gt;
      &lt;td&gt;99&lt;/td&gt;
      &lt;td&gt;85&lt;/td&gt;
      &lt;td&gt;86&lt;/td&gt;
      &lt;td&gt;142&lt;/td&gt;
      &lt;td&gt;123&lt;/td&gt;
      &lt;td&gt;120&lt;/td&gt;
      &lt;td&gt;144&lt;/td&gt;
      &lt;td&gt;94&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Whey&lt;/td&gt;
      &lt;td&gt;106&lt;/td&gt;
      &lt;td&gt;106&lt;/td&gt;
      &lt;td&gt;177&lt;/td&gt;
      &lt;td&gt;149&lt;/td&gt;
      &lt;td&gt;156&lt;/td&gt;
      &lt;td&gt;155&lt;/td&gt;
      &lt;td&gt;128&lt;/td&gt;
      &lt;td&gt;216&lt;/td&gt;
      &lt;td&gt;232&lt;/td&gt;
      &lt;td&gt;125&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;h1 id=&quot;references&quot;&gt;References&lt;/h1&gt;


&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Herreman, L., Nommensen, P., Pennings, B., &amp;amp; Laus, M. C. (2020). &lt;a href=&quot;https://doi.org/10.1002/fsn3.1809&quot;&gt;Comprehensive overview of the quality of plant- and animal-sourced proteins based on the digestible indispensable amino acid score.&lt;/a&gt; &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:2&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;FAO Expert Consultation (2011). &lt;a href=&quot;https://www.fao.org/ag/humannutrition/35978-02317b979a686a57aa4593304ffc17f06.pdf&quot;&gt;Dietary protein quality evaluation in human nutrition.&lt;/a&gt; &lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>Just Because a Number Is a Rounding Error Doesn't Mean It's Not Important</title>
				<pubDate>Fri, 02 Aug 2024 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2024/08/02/rounding_error_can_be_important/</link>
				<guid isPermaLink="true">http://mdickens.me/2024/08/02/rounding_error_can_be_important/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;Sometimes, people call a number a “rounding error” as if to say it doesn’t matter. But a rounding error can still be very important!&lt;/p&gt;

&lt;p&gt;Say I’m tracking my weight. If I’ve put on 0.1 pounds since yesterday, that’s a rounding error—my weight fluctuates by 3 pounds on a day-to-day basis, so 0.1 pounds means nothing. But if I continue gaining 0.1 pounds per day, I’ll be obese after 18 months, and by the time I’m 70 I’ll be the fattest person who ever lived.&lt;/p&gt;

&lt;p&gt;Or if the stock market moves 1% in a day, that’s a rounding error. But if it moves up 1% every trading day for a year (about 252 days), every individual day of which is a rounding error, it will be up over 1,100%, which would be the craziest thing that’s ever happened in the history of the global economy.&lt;/p&gt;

&lt;p&gt;This happens whenever the standard deviation is much larger than the mean. A large standard deviation means a “real” change gets obscured by random movement. But over enough iterations, the random movements even out and the real changes persist. For example, the stock market has an average daily return of 0.02% and a standard deviation of 0.8%. The standard deviation is 40x larger than the mean, so a real trend in prices gets totally washed out by noise. The market’s daily average return is a rounding error, but it’s still important.&lt;/p&gt;
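&lt;p&gt;To make this concrete: over n days, the cumulative “real” change grows in proportion to n, while the accumulated noise only grows in proportion to the square root of n, so the signal-to-noise ratio of the cumulative change rises as √n. A quick sketch using the figures above (the function name is made up for illustration):&lt;/p&gt;

```python
# Daily mean return 0.02% and standard deviation 0.8%, per the text.
MU, SIGMA = 0.0002, 0.008

def signal_to_noise(n_days):
    # Drift accumulates linearly; independent noise accumulates as sqrt(n).
    return (n_days * MU) / (SIGMA * n_days ** 0.5)

for days in (1, 252, 2520):
    print(days, signal_to_noise(days))
# At 1 day the ratio is tiny (~0.025); after a decade of trading days,
# the real trend is comparable in size to the accumulated noise.
```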



                </description>
			</item>
		
			<item>
				<title>A 401(k) Sometimes Isn't Worth It</title>
				<pubDate>Wed, 24 Jul 2024 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2024/07/24/401k_fees/</link>
				<guid isPermaLink="true">http://mdickens.me/2024/07/24/401k_fees/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;You don’t always save money by putting your investments into a 401(k).&lt;/p&gt;

&lt;p&gt;When you invest money inside a 401(k), you don’t have to pay taxes on any returns your investments earn while the money stays in the account. But you also have to pay an ongoing fee to your 401(k) provider.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;If you buy and hold index funds in a taxable account, you don’t have to pay any capital gains tax on price increases until you sell.&lt;/li&gt;
  &lt;li&gt;In a 401(k), the annual fee adds up every year and may eventually exceed the tax savings.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So the taxes cap out at the capital gains tax rate (15% or 20% depending on your tax bracket),&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt; whereas the expenses of a 401(k) continue to accumulate.&lt;/p&gt;

&lt;p&gt;However, in a taxable account, you do still have to pay taxes on dividends (and bond payouts) every year, and those taxes might cost you more than the 401(k) fees.&lt;sup id=&quot;fnref:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;Below is a calculator to determine how many years before the 401(k) fees exceed the tax savings, if ever.&lt;/p&gt;

&lt;script src=&quot;/scripts/401k.js&quot; defer=&quot;&quot;&gt;&lt;/script&gt;

&lt;form name=&quot;401k&quot;&gt;
    &lt;table&gt;
        &lt;tr&gt;
            &lt;td style=&quot;text-align:right&quot;&gt;employer matching (%)&lt;/td&gt;
            &lt;td&gt;&lt;input style=&quot;width:80px&quot; type=&quot;number&quot; step=&quot;50&quot; id=&quot;matching&quot; value=&quot;0&quot; /&gt;&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
            &lt;td style=&quot;text-align:right&quot;&gt;total investment return including dividends (nominal) (%)&lt;/td&gt;
            &lt;td&gt;&lt;input style=&quot;width:80px&quot; type=&quot;number&quot; step=&quot;0.5&quot; id=&quot;marketReturn&quot; value=&quot;8&quot; /&gt;&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
            &lt;td style=&quot;text-align:right&quot;&gt;dividend yield (%)&lt;/td&gt;
            &lt;td&gt;&lt;input style=&quot;width:80px&quot; type=&quot;number&quot; step=&quot;0.5&quot; id=&quot;divYield&quot; value=&quot;2&quot; /&gt;&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
            &lt;td style=&quot;text-align:right&quot;&gt;401(k) fee (%)&lt;/td&gt;
            &lt;td&gt;&lt;input style=&quot;width:80px&quot; type=&quot;number&quot; step=&quot;0.05&quot; id=&quot;fee&quot; value=&quot;0.5&quot; /&gt;&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
            &lt;td style=&quot;text-align:right&quot;&gt;capital gains tax rate (%)&lt;/td&gt;
            &lt;td&gt;&lt;input style=&quot;width:80px&quot; type=&quot;number&quot; step=&quot;1&quot; id=&quot;cgTax&quot; value=&quot;15&quot; /&gt;&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
            &lt;td style=&quot;text-align:right&quot;&gt;income tax rate today (%)&lt;/td&gt;
            &lt;td&gt;&lt;input style=&quot;width:80px&quot; type=&quot;number&quot; step=&quot;1&quot; id=&quot;incomeTaxToday&quot; value=&quot;24&quot; /&gt;&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
            &lt;td style=&quot;text-align:right&quot;&gt;income tax rate in retirement (%)&lt;/td&gt;
            &lt;td&gt;&lt;input style=&quot;width:80px&quot; type=&quot;number&quot; step=&quot;1&quot; id=&quot;incomeTaxInRetirement&quot; value=&quot;24&quot; /&gt;&lt;/td&gt;
        &lt;/tr&gt;
     &lt;/table&gt;

    &lt;p&gt;&lt;input type=&quot;button&quot; class=&quot;button&quot; name=&quot;button&quot; value=&quot;Calculate&quot; onclick=&quot;htmlBreakEventPoint()&quot; /&gt;&lt;/p&gt;

    &lt;table style=&quot;font-size:1em&quot;&gt;
        &lt;tr&gt;
            &lt;td&gt;A 401(k) falls behind a taxable account after:&lt;/td&gt;
            &lt;td&gt;&lt;strong&gt;&lt;div id=&quot;breakEven&quot;&gt;&lt;/div&gt;&lt;/strong&gt;&lt;/td&gt;
        &lt;/tr&gt;
    &lt;/table&gt;

    &lt;br /&gt;

&lt;/form&gt;

&lt;p&gt;This calculator assumes you buy index funds and hold them forever. If you trade stocks within a taxable account, you have to pay taxes every time you make a trade.&lt;/p&gt;
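&lt;p&gt;The comparison the calculator makes can be sketched roughly as follows. This is a hypothetical simplification, not the actual &lt;code&gt;/scripts/401k.js&lt;/code&gt;; it assumes qualified dividends are taxed annually at the capital gains rate and reinvested, and that the taxable account is sold all at once at the end.&lt;/p&gt;

```python
def break_even_year(match=0.0, ret=0.08, div=0.02, fee=0.005,
                    cg_tax=0.15, tax_now=0.24, tax_retire=0.24,
                    horizon=150):
    """First year the taxable account beats the 401(k) after tax, or None.

    One pre-tax dollar invested at year 0. Simplifications: dividends are
    taxed each year at the capital gains rate and reinvested; capital
    gains are taxed once at liquidation.
    """
    v401 = 1.0 + match                 # 401(k) invests pre-tax dollars plus match
    v_taxable = basis = 1.0 - tax_now  # taxable account starts after income tax
    for year in range(1, horizon + 1):
        v401 *= 1 + ret - fee
        after_tax_div = v_taxable * div * (1 - cg_tax)
        v_taxable = v_taxable * (1 + ret - div) + after_tax_div
        basis += after_tax_div         # reinvested dividends raise the cost basis
        final_401k = v401 * (1 - tax_retire)
        final_taxable = v_taxable - cg_tax * (v_taxable - basis)
        if final_taxable > final_401k:
            return year
    return None

print(break_even_year())  # with the defaults above, several decades out
```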

&lt;p&gt;Something else to consider: If you quit your job, your old employer’s 401(k) provider will let you roll your 401(k) into an IRA. You don’t have to pay any fees on an IRA.&lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt; So even if the 401(k) fees exceed the tax benefits after (say) 30 years, that’s not a problem if you expect to quit your job after less than 30 years. Realistically, few people stay at one job for so long that the 401(k) fees exceed the tax savings.&lt;/p&gt;

&lt;p&gt;(If you change jobs, usually you can roll your old 401(k) into your new 401(k), but I wouldn’t do that because it means you have to keep paying 401(k) fees. It’s almost always better to roll your old 401(k) into an IRA.)&lt;/p&gt;

&lt;h2 id=&quot;notes&quot;&gt;Notes&lt;/h2&gt;


&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;The capital gains tax will always be less than 15%/20% of your account value (depending on which tax bracket you’re in), but it converges on 15%/20% as the value approaches infinity.&lt;/p&gt;

      &lt;p&gt;Example: If you invest $100 in an index fund and you sell when the price reaches $101, you have to pay 20% of $1 (assuming you’re in the 20% tax bracket), which is only 0.2% of the total value. If you sell when the price reaches $1 million, you have to pay 20% of $999,900, which is 19.998% of the total value. &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:3&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;H/T &lt;a href=&quot;https://www.benkuhn.net/&quot;&gt;Ben Kuhn&lt;/a&gt; for raising this possibility. I’m sure someone somewhere had considered it before him, but I’ve never seen anyone else bring it up, and standard financial advice ignores it. &lt;a href=&quot;#fnref:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:2&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Other than ETF/mutual fund fees, but you have to pay those no matter what. &lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>Continuing My Caffeine Self-Experiment</title>
				<pubDate>Mon, 24 Jun 2024 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2024/06/24/continuing_caffeine_self_experiment/</link>
				<guid isPermaLink="true">http://mdickens.me/2024/06/24/continuing_caffeine_self_experiment/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;I did another &lt;a href=&quot;https://mdickens.me/2024/04/11/caffeine_self_experiment/&quot;&gt;caffeine experiment on myself&lt;/a&gt;. This time I tested if I could have caffeine 4 days a week without getting habituated.&lt;/p&gt;

&lt;p&gt;Last time, when I took caffeine 3 days a week, I didn’t get habituated but the results were weird. This time, with the more frequent dose, I still didn’t get habituated, and the results were weird again!&lt;/p&gt;

&lt;!-- more --&gt;

&lt;h2 id=&quot;contents&quot;&gt;Contents&lt;/h2&gt;

&lt;ul id=&quot;markdown-toc&quot;&gt;
  &lt;li&gt;&lt;a href=&quot;#introduction&quot; id=&quot;markdown-toc-introduction&quot;&gt;Introduction&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#experimental-procedure&quot; id=&quot;markdown-toc-experimental-procedure&quot;&gt;Experimental procedure&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#results&quot; id=&quot;markdown-toc-results&quot;&gt;Results&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#when-i-take-caffeine-3-days-in-a-row-do-i-habituate-by-the-3rd-day&quot; id=&quot;markdown-toc-when-i-take-caffeine-3-days-in-a-row-do-i-habituate-by-the-3rd-day&quot;&gt;When I take caffeine 3 days in a row, do I habituate by the 3rd day?&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#notes&quot; id=&quot;markdown-toc-notes&quot;&gt;Notes&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;introduction&quot;&gt;Introduction&lt;/h2&gt;

&lt;p&gt;In April, I &lt;a href=&quot;https://mdickens.me/2024/04/11/caffeine_self_experiment/&quot;&gt;published the results of a self-experiment&lt;/a&gt; on caffeine cycling. I drank coffee 3 days a week for 6 weeks&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt; and tested my reaction time to see whether I would become habituated to caffeine. If I had become habituated, my reaction time should have gotten worse over the 6 weeks. It didn’t get worse, and in fact it got better, for unclear reasons.&lt;/p&gt;

&lt;p&gt;So my experiment showed that I didn’t become (detectably) habituated to caffeine when taking it 3 days a week. I ran a second experiment to see what happens if I up the dosage frequency to 4 days a week. Do I start to become habituated? Or can I get away with it?&lt;/p&gt;

&lt;p&gt;Turns out, I can get away with it. The results from the 4-day-a-week experiment show no signs of habituation.&lt;/p&gt;

&lt;p&gt;In fact, like &lt;a href=&quot;https://mdickens.me/2024/04/11/caffeine_self_experiment/#experimental-phase&quot;&gt;last time&lt;/a&gt;, my reaction time got (slightly, non-significantly) &lt;em&gt;better&lt;/em&gt; over the course of the experiment.&lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;h2 id=&quot;experimental-procedure&quot;&gt;Experimental procedure&lt;/h2&gt;

&lt;p&gt;I followed the procedure of phase 4 as described in my &lt;a href=&quot;https://mdickens.me/2024/03/02/caffeine_tolerance/#appendix-b-pre-registration-for-a-caffeine-self-experiment&quot;&gt;pre-registration&lt;/a&gt;—it’s the same as phase 3, except that I drank coffee 4 days a week instead of 3. Specifically, I had caffeine on Monday, Wednesday, Thursday, and Friday. I ran the experiment for six weeks.&lt;/p&gt;

&lt;p&gt;A quick review of the experimental procedure:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Take caffeine on Mon/Wed/Thu/Fri.&lt;/li&gt;
  &lt;li&gt;Test reaction time without caffeine every morning, and test again an hour after caffeine on caffeine days.&lt;/li&gt;
  &lt;li&gt;Look at the slope of reaction time over 6 weeks. If post-caffeine reaction time got worse, that means I became habituated. If no-caffeine reaction time got worse, that means I developed withdrawal symptoms.&lt;/li&gt;
&lt;/ol&gt;
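&lt;p&gt;The slope in step 3 is an ordinary least-squares fit of reaction time against day number. A minimal sketch (the reaction times below are made up for illustration, not my actual data):&lt;/p&gt;

```python
# Least-squares slope of reaction time (ms) vs. day number.
# A positive slope on post-caffeine tests would indicate habituation.
def slope(days, times):
    n = len(days)
    mean_x = sum(days) / n
    mean_y = sum(times) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(days, times))
    var = sum((x - mean_x) ** 2 for x in days)
    return cov / var

# Hypothetical post-caffeine reaction times over 8 test days:
days = [1, 3, 4, 5, 8, 10, 11, 12]
times = [281, 279, 283, 276, 278, 274, 277, 272]
print(slope(days, times))  # about -0.69: negative, so no habituation trend
```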

&lt;p&gt;I had caffeine three days in a row Wed/Thu/Fri so that I could test a secondary hypothesis: does caffeine become less effective by the third day? Pre-existing research suggests that habituation starts to appear as early as day 3. I might see a small habituation by Friday which then dissipates over the weekend.&lt;/p&gt;

&lt;h2 id=&quot;results&quot;&gt;Results&lt;/h2&gt;

&lt;p&gt;My reaction time slightly improved over the course of the six weeks.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/caf-experimental2-regression.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Controlling for sleep quality (as measured by number of hours in bed) did not change the slope at all (to two significant figures).&lt;/p&gt;

&lt;p&gt;My reaction time also improved during the &lt;a href=&quot;https://mdickens.me/2024/04/11/caffeine_self_experiment/#experimental-phase&quot;&gt;previous experimental phase&lt;/a&gt;. But it didn’t improve over both phases combined. Looking at both experimental periods together (including the abstinence period in between), the slope is nearly flat (caffeine slope 0.01, no-caffeine slope –0.02).&lt;/p&gt;

&lt;p&gt;As &lt;a href=&quot;https://mdickens.me/2024/04/11/caffeine_self_experiment/#what-explains-these-results&quot;&gt;before&lt;/a&gt;, I don’t know how to explain why my reaction time improved within each experimental phase. My best guess is there’s some random-ish process that produces long-run trends in reaction time. Perhaps it’s the result of variations in sleep quality, but a type of sleep quality that time-in-bed can’t measure.&lt;/p&gt;

&lt;p&gt;But it looks like I didn’t get habituated when taking caffeine 4 days a week—or, at least, not to a detectable degree. So I’m going to keep taking caffeine 4 days a week.&lt;/p&gt;

&lt;h2 id=&quot;when-i-take-caffeine-3-days-in-a-row-do-i-habituate-by-the-3rd-day&quot;&gt;When I take caffeine 3 days in a row, do I habituate by the 3rd day?&lt;/h2&gt;

&lt;p&gt;The evidence suggests that I don’t, but the evidence is weak.&lt;/p&gt;

&lt;p&gt;I don’t have great data because I only collected five data points (it was supposed to be six, one for each week of the experiment, but I got sick on week five which messed up my caffeine schedule&lt;sup id=&quot;fnref:6&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:6&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt;).&lt;/p&gt;

&lt;p&gt;On average, my reaction time was 278.8 ms on the first day and 273.0 ms on the third day (so reaction time got better, not worse). The difference had standard error 8.4 ms (p = 0.5). This weakly suggests that I don’t start getting habituated yet by the 3rd day, but my test was underpowered. (I’d only expect reaction time to get worse by maybe 3–5 ms, and the standard error was 8.4 ms. That’s an odds ratio of 1.15:1 between 0 ms and 4 ms of habituation.)&lt;/p&gt;

&lt;p&gt;I’d like to compare this to the pre-existing literature, but to my knowledge, no studies have ever administered daily caffeine to non-habituated users and measured the daily habituation curve. &lt;a href=&quot;https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6343867/&quot;&gt;Lara et al. (2019)&lt;/a&gt;&lt;sup id=&quot;fnref:5&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:5&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt; comes the closest: it didn’t measure performance every day, but it did have participants take caffeine for 20 days and measured athletic performance on day 1 and day 4. It found a slight decrease in performance between days 1 and 4, with the exact number varying from 1% to 3% depending on the metric used. Some research on rats (see the studies cited &lt;a href=&quot;/2024/03/02/caffeine_tolerance/#experimental-evidence-on-intermittent-dosing&quot;&gt;here&lt;/a&gt;) found that performance slightly decreased from day 1 to day 3, but rat metabolism runs faster than humans’ so on priors I’d expect humans to become habituated more slowly.&lt;/p&gt;

&lt;p&gt;In conclusion, I don’t really know anything, but I’m gonna keep taking caffeine 4 days a week.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Source code and data for this experiment are available &lt;a href=&quot;https://github.com/michaeldickens/public-scripts/tree/master/caffeine&quot;&gt;on GitHub&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;p&gt;I bring this up because isn’t it weird that adding one extra week changed the p-value for no-caffeine tests from &amp;lt; 0.001 to 0.15?&lt;/p&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;In the &lt;a href=&quot;https://mdickens.me/2024/04/11/caffeine_self_experiment/&quot;&gt;original post&lt;/a&gt;, I included four weeks of data, but I continued the experiment for two weeks beyond what I had originally planned. The extra two weeks caused the regression lines to flatten out, but did not change their direction. This supports my hypothesis that the downward-sloping regressions were the result of some sort of anomaly. &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:2&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I looked at the results at the end of week 5 because I thought I might need to end the experiment early (turns out I didn’t&lt;sup id=&quot;fnref:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;5&lt;/a&gt;&lt;/sup&gt;), and I saw a much stronger downward slope: p = 0.029 for slope on post-caffeine tests, and p &amp;lt; 0.001 for slope on no-caffeine tests. But I performed badly enough in week six that the slope largely flattened out. &lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:6&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I still took caffeine 4 days that week, so the overall experiment didn’t get messed up. But I changed which days I took caffeine, so I didn’t get 3 days in a row. &lt;a href=&quot;#fnref:6&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:5&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Beatriz Lara, Carlos Ruiz-Moreno, Juan Jose Salinero &amp;amp; Juan Del Coso (2019). &lt;a href=&quot;https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6343867/&quot;&gt;Time course of tolerance to the performance benefits of caffeine.&lt;/a&gt; &lt;a href=&quot;#fnref:5&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:3&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I had a problem with my GPU driver that increased the latency on my monitor, which was going to mess up the results. But I fixed the problem after a couple days so I just skipped those days, and I didn’t have to skip any of the important days. &lt;a href=&quot;#fnref:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>Some Things I've Changed My Mind On</title>
				<pubDate>Thu, 23 May 2024 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2024/05/23/some_things_ive_changed_my_mind_on/</link>
				<guid isPermaLink="true">http://mdickens.me/2024/05/23/some_things_ive_changed_my_mind_on/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;Here are some things I’ve changed my mind about. Most of the changes are recent (because I can remember recent stuff more easily) but some of them happened 5+ years ago.&lt;/p&gt;

&lt;p&gt;I’m a little nervous about writing this because a few of my old beliefs were really dumb. But I don’t think it would be fair to include only my smart beliefs.&lt;/p&gt;

&lt;!-- more --&gt;

&lt;h2 id=&quot;contents&quot;&gt;Contents&lt;/h2&gt;

&lt;ul id=&quot;markdown-toc&quot;&gt;
  &lt;li&gt;&lt;a href=&quot;#contents&quot; id=&quot;markdown-toc-contents&quot;&gt;Contents&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#effective-altruism&quot; id=&quot;markdown-toc-effective-altruism&quot;&gt;Effective altruism&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#finance&quot; id=&quot;markdown-toc-finance&quot;&gt;Finance&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#health-and-fitness&quot; id=&quot;markdown-toc-health-and-fitness&quot;&gt;Health and fitness&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#miscellaneous&quot; id=&quot;markdown-toc-miscellaneous&quot;&gt;Miscellaneous&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#what-patterns-emerge&quot; id=&quot;markdown-toc-what-patterns-emerge&quot;&gt;What patterns emerge?&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#notes&quot; id=&quot;markdown-toc-notes&quot;&gt;Notes&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;effective-altruism&quot;&gt;Effective altruism&lt;/h2&gt;

&lt;ol&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;My old belief:&lt;/strong&gt; In my original 2021 version of &lt;a href=&quot;https://mdickens.me/2021/04/05/comparison_of_DAF_providers/&quot;&gt;A Comparison of Donor-Advised Fund Providers&lt;/a&gt;, I recommended Schwab Charitable as the best DAF provider for most people.&lt;/p&gt;

    &lt;p&gt;&lt;strong&gt;What changed my mind:&lt;/strong&gt; When I reviewed the post this year, I noticed Schwab’s default fund fees are too high, so it was a bad idea to recommend them. I don’t recall exactly what I thought about the default fund fees when I first wrote the article; perhaps I noticed the high fees and thought it didn’t matter because people can switch to cheaper funds. If I did think that, then that was a mistake because a large proportion of people will stick with the default option without looking at it, and if they do that with Schwab, they’ll get ripped off.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;My old belief:&lt;/strong&gt; After writing &lt;a href=&quot;https://mdickens.me/2020/11/23/uncorrelated_investing/&quot;&gt;Uncorrelated Investments for Altruists&lt;/a&gt;, I thought that the marginal donor’s philanthropic investment portfolio should aim for near zero correlation to equities.&lt;/p&gt;

    &lt;p&gt;&lt;strong&gt;What changed my mind:&lt;/strong&gt; In the process of writing &lt;a href=&quot;https://mdickens.me/2020/12/14/asset_allocation_for_altruists_with_constraints/&quot;&gt;Asset Allocation and Leverage for Altruists with Constraints&lt;/a&gt;, I wrote &lt;a href=&quot;https://github.com/michaeldickens/public-scripts/blob/master/mvo.py&quot;&gt;code&lt;/a&gt; to do portfolio optimization and ran it under various assumptions. I found results that contradicted my previous belief. Now I believe that the optimal marginal investment portfolio should still have some correlation to equities because getting the extra expected return is worth accepting some positive correlation.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;My old belief:&lt;/strong&gt; In 2015 I donated to Raising for Effective Giving and &lt;a href=&quot;https://mdickens.me/2015/09/15/my_cause_selection/&quot;&gt;argued&lt;/a&gt; for why they were my favorite donation target.&lt;/p&gt;

    &lt;p&gt;&lt;strong&gt;What changed my mind:&lt;/strong&gt; After 2015, their fundraising model didn’t keep working as well as I expected.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;My old belief:&lt;/strong&gt; &lt;a href=&quot;https://mdickens.me/2021/07/21/metaculus_learning_value/&quot;&gt;Metaculus Questions Suggest Money Will Do More Good in the Future&lt;/a&gt;&lt;/p&gt;

    &lt;p&gt;&lt;strong&gt;What changed my mind:&lt;/strong&gt; After I published that post, some commenters argued for a different interpretation of the Metaculus questions.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ol&gt;

&lt;h2 id=&quot;finance&quot;&gt;Finance&lt;/h2&gt;

&lt;ol&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;My old belief:&lt;/strong&gt; SBF and Alameda have skill at beating the market.&lt;/p&gt;

    &lt;p&gt;&lt;strong&gt;What changed my mind:&lt;/strong&gt; I’m sure you know what changed my mind. There’s some chance that they did actually have skill and they blew up due to bad luck (them committing fraud was bad behavior, not bad luck, but as I understand it, they blew up because they lost a bunch of money, and they might have gotten away with the fraud if they’d made money). But I now believe it’s more likely that the risks they took were not calculated and they didn’t have much skill. (Clearly SBF had a lot of skill at fundraising, but that’s not the same thing as trading skill.)&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;My old belief:&lt;/strong&gt; In 2021 and earlier, I estimated future market returns using Research Affiliates’ &lt;a href=&quot;https://interactive.researchaffiliates.com/asset-allocation&quot;&gt;model&lt;/a&gt; (e.g. &lt;a href=&quot;https://mdickens.me/2020/01/06/how_much_leverage_should_altruists_use/#return-expectations&quot;&gt;here&lt;/a&gt;), which assumes market valuations mean-revert after 10 years.&lt;/p&gt;

    &lt;p&gt;&lt;strong&gt;What changed my mind:&lt;/strong&gt; I read AQR’s capital market assumptions (see e.g. &lt;a href=&quot;https://www.aqr.com/Insights/Research/Alternative-Thinking/2024-Capital-Market-Assumptions-for-Major-Asset-Classes&quot;&gt;their 2024 publication&lt;/a&gt;), where they argue that there’s no strong reason to expect valuations to mean revert. Now I prefer the AQR model, which uses the traditional “yield + growth” approach with no consideration for valuation. I still believe valuations ought to mean revert, but it could easily take more than 10 years, and they might only revert somewhat, so I think it’s reasonable to take an average of the AQR and Research Affiliates projections.&lt;/p&gt;

    &lt;p&gt;Putting less weight on mean reversion makes equity market return projections cluster closer together. I would still order expected equity returns as emerging markets &amp;gt; developed markets ex-US &amp;gt; US, but I do not expect the differences to be as big as I used to. I used to quote something like a 6% real return for emerging markets and 0% for the US market. Now I expect more like 5% for emerging markets and 2% for the US.&lt;/p&gt;

    &lt;p&gt;I wrote more about my updated expectations &lt;a href=&quot;https://mdickens.me/2022/04/01/how_I_estimate_future_investment_returns/&quot;&gt;here&lt;/a&gt;. I don’t pay too much attention to return projections, so changing my mind on this didn’t change my investment strategy.&lt;/p&gt;
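    &lt;p&gt;The “yield + growth” arithmetic is simple enough to sketch in a few lines of Python. All numbers below are hypothetical, and the real AQR and Research Affiliates models are far more detailed than this; the sketch only shows how the two approaches differ by a valuation-reversion term:&lt;/p&gt;

```python
def yield_plus_growth(dividend_yield, real_growth):
    """AQR-style expected real return: income plus growth, ignoring valuations."""
    return dividend_yield + real_growth

def with_mean_reversion(dividend_yield, real_growth, current_cape, fair_cape, years=10):
    """RA-style: add an annualized return from valuations reverting to 'fair' over `years`."""
    reversion = (fair_cape / current_cape) ** (1 / years) - 1
    return dividend_yield + real_growth + reversion

# Hypothetical US inputs: 1.5% yield, 1.5% real growth, CAPE of 30 vs. a "fair" CAPE of 20
aqr_style = yield_plus_growth(0.015, 0.015)
ra_style = with_mean_reversion(0.015, 0.015, current_cape=30, fair_cape=20)
blended = (aqr_style + ra_style) / 2  # averaging the two projections, as suggested above
```

    &lt;p&gt;With these made-up inputs, the reversion term drags the RA-style projection roughly 4 percentage points below the AQR-style one, which is why the choice of model matters so much for expensive markets.&lt;/p&gt;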
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;My old belief:&lt;/strong&gt; From a purely financial perspective (ignoring personal taste etc.), renting a house is always a better decision than buying.&lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

    &lt;p&gt;&lt;strong&gt;What changed my mind:&lt;/strong&gt; Around 2020, I read the argument that owning a house works as a hedge against future housing expenditures, which means buying is better than renting in many cases.&lt;sup id=&quot;fnref:5&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:5&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

    &lt;p&gt;I still believe most discourse on renting vs. buying is confused. For example:&lt;/p&gt;

    &lt;ol&gt;
      &lt;li&gt;It doesn’t make any sense to directly compare monthly rent to monthly mortgage payments because with a mortgage, you’re accumulating equity in an asset.&lt;/li&gt;
      &lt;li&gt;Maybe people understand point 1 and believe a mortgage is always better because renting is “throwing money away”. That’s also wrong because you have to account for the time value of money. Dumping most of your net worth into a house has a huge opportunity cost.&lt;/li&gt;
      &lt;li&gt;When you account for opportunity costs, you have to consider the risk of a mortgage vs. the risk of the counterfactual investment (e.g., an index fund).&lt;/li&gt;
    &lt;/ol&gt;

    &lt;p&gt;I looked through a dozen online “rent vs. buy” calculators; only three of them properly accounted for equity and opportunity costs (&lt;a href=&quot;https://www.financialmentor.com/calculator/rent-vs-buy-calculator&quot;&gt;Financial Mentor&lt;/a&gt;, &lt;a href=&quot;https://www.fool.com/calculators/should-i-buy-or-rent-lets-crunch-the-numbers.aspx&quot;&gt;Motley Fool&lt;/a&gt;, and &lt;a href=&quot;https://www.nytimes.com/interactive/2024/upshot/buy-rent-calculator.html&quot;&gt;New York Times&lt;/a&gt; (paywalled)), and none of them accounted for risk. (The three good calculators use different methods but they’re basically interchangeable.)&lt;/p&gt;
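    &lt;p&gt;To make points 1 and 2 concrete, here’s a toy rent-vs-buy comparison in Python. It accounts for home equity and the opportunity cost of invested money, but ignores risk, mortgages, taxes, and transaction costs, and every input is made up:&lt;/p&gt;

```python
def rent_and_invest(net_worth, annual_rent, market_return, years):
    """Pay rent each year; keep everything else in an index fund."""
    wealth = net_worth
    for _ in range(years):
        wealth = (wealth - annual_rent) * (1 + market_return)
    return wealth

def buy_with_cash(net_worth, price, home_growth, market_return, annual_costs, years):
    """Buy the house outright; the house appreciates while the rest stays invested."""
    home_value = price
    wealth = net_worth - price  # opportunity cost: this money is no longer in the market
    for _ in range(years):
        home_value *= 1 + home_growth
        wealth = (wealth - annual_costs) * (1 + market_return)
    return wealth + home_value  # home equity counts toward final net worth

# Made-up inputs: $1M net worth; a $500k house appreciating 3%/year with $15k/year
# in taxes and maintenance; or renting the same house for $25k/year at a 7% market return.
renting = rent_and_invest(1_000_000, 25_000, 0.07, years=30)
buying = buy_with_cash(1_000_000, 500_000, 0.03, 0.07, 15_000, years=30)
```

    &lt;p&gt;Which scenario wins is extremely sensitive to the inputs (small changes to the rent, appreciation rate, or market return flip the answer), which is exactly why a calculator that skips equity or opportunity cost can’t get it right.&lt;/p&gt;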
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;My old belief:&lt;/strong&gt; In 2016 when I was looking for a full-time job at a startup, I evaluated equity compensation at face value.&lt;/p&gt;

    &lt;p&gt;&lt;strong&gt;What changed my mind:&lt;/strong&gt; I should have considered risk. Equity compensation is risky (2–4x riskier than an index fund), which makes it look much worse.&lt;/p&gt;

    &lt;p&gt;&lt;strong&gt;What changed my mind again:&lt;/strong&gt; Later, I realized there are some other factors that make equity compensation look better. In 2021 I did a more in-depth analysis &lt;a href=&quot;https://mdickens.me/2021/11/12/ea_work_at_startups/&quot;&gt;here&lt;/a&gt;, and my current opinion is that equity in a good startup is worth more than its face value.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;My old belief:&lt;/strong&gt; In 2013 I read some research on investing strategies like &lt;a href=&quot;https://en.wikipedia.org/wiki/Magic_formula_investing&quot;&gt;Greenblatt’s magic formula&lt;/a&gt; and thought they sounded like a great idea.&lt;/p&gt;

    &lt;p&gt;&lt;strong&gt;What changed my mind:&lt;/strong&gt; Actually, I still believe Greenblatt-esque strategies are a great idea (at least for some people in some contexts). But I believe I over-updated on the evidence I had at the time; I just got lucky that I was looking at weak evidence for true claims.&lt;/p&gt;

    &lt;p&gt;I originally became convinced that Greenblatt’s magic formula worked when I read Abbey &amp;amp; Larkin (2012), “Can simple one and two-factor investing strategies capture the value premium?” The paper looked at US stocks over a 30-year period. Now, I would want to see more evidence than that. I like the five criteria given by Berkin &amp;amp; Swedroe’s &lt;a href=&quot;https://www.amazon.com/Your-Complete-Guide-Factor-Based-Investing/dp/0692783652&quot;&gt;Your Complete Guide to Factor-Based Investing&lt;/a&gt;: to take a market anomaly seriously, it must be (1) &lt;strong&gt;persistent&lt;/strong&gt; across time, (2) &lt;strong&gt;pervasive&lt;/strong&gt; across markets, (3) &lt;strong&gt;robust&lt;/strong&gt; to different formulations, (4) &lt;strong&gt;investable&lt;/strong&gt;, and (5) have a risk-based or behavioral &lt;strong&gt;explanation&lt;/strong&gt;. The evidence from Abbey &amp;amp; Larkin (2012) established robustness and half-established persistence (30 years is decently long so I give it half credit), but didn’t address the other three and a half criteria.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ol&gt;

&lt;h2 id=&quot;health-and-fitness&quot;&gt;Health and fitness&lt;/h2&gt;

&lt;p&gt;(I’ve been thinking a lot about health and fitness lately.)&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;My old belief:&lt;/strong&gt; In 2011 I quit coffee cold turkey after reading &lt;em&gt;You Are Not So Smart&lt;/em&gt;’s article &lt;a href=&quot;https://youarenotsosmart.com/2010/02/22/coffee/&quot;&gt;Coffee&lt;/a&gt;. I believed that caffeine had no effect on a daily user except to reverse withdrawal symptoms.&lt;/p&gt;

    &lt;p&gt;&lt;strong&gt;What changed my mind:&lt;/strong&gt; In &lt;a href=&quot;https://mdickens.me/2024/03/29/does_caffeine_stop_working/&quot;&gt;Does Caffeine Stop Working?&lt;/a&gt;, I investigated more deeply and now I believe that caffeine retains something like half its initial benefit.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;My old belief:&lt;/strong&gt; I can trust the research on stuff like caffeine.&lt;/p&gt;

    &lt;p&gt;&lt;strong&gt;What changed my mind:&lt;/strong&gt; I read some caffeine studies and found that most of them were pretty bad. And not bad in the obvious way of having small sample sizes (which is honestly fine as long as you’re aware of it—weak evidence is still evidence). They were bad in the sense of “your study’s methodology is not capable even in principle of providing evidence for or against your hypothesis”. The &lt;em&gt;majority&lt;/em&gt; of studies were like that (around 75% of them, if I remember correctly).&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;My old belief:&lt;/strong&gt; After reading &lt;a href=&quot;https://www.amazon.com/Starting-Strength-Mark-Rippetoe-ebook/dp/B006XJR5ZA/&quot;&gt;Starting Strength&lt;/a&gt; in 2014,&lt;sup id=&quot;fnref:13&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:13&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt; I thought Starting Strength-style training was the best in every situation, and bodybuilding-style isolation movements were dumb.&lt;/p&gt;

    &lt;p&gt;&lt;strong&gt;What changed my mind:&lt;/strong&gt; After a few months, I realized I had been on the “hill of novice overconfidence” and actually isolation exercises are fine. Later I re-read &lt;em&gt;Starting Strength&lt;/em&gt; and realized it never even said you shouldn’t do isolation training. It made the more nuanced claims that (1) isolation training is not ideal for developing strength and (2) compound barbell movements are better for beginners.&lt;/p&gt;

    &lt;p&gt;After learning more about the scientific literature and the diversity in how elite athletes train, now I tend to believe differences in training mostly don’t matter. You can get good results with any method as long as you lift heavy weights, increase the weight over time, and get sufficient food and rest.&lt;sup id=&quot;fnref:14&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:14&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;My old belief:&lt;/strong&gt; High-intensity interval training is the best kind of cardio.&lt;/p&gt;

    &lt;p&gt;&lt;strong&gt;What changed my mind:&lt;/strong&gt; I started reading some experts on exercise science, and they say low-intensity cardio is just as good most of the time, and the ideal routine consists of something like 80% easy cardio and 20% hard cardio (and the easy cardio should be really easy&lt;sup id=&quot;fnref:4&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:4&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;5&lt;/a&gt;&lt;/sup&gt;).&lt;/p&gt;

    &lt;p&gt;And thank God for that because I kinda hate doing moderate/hard cardio. I’ve started being way more consistent about aerobic exercise—I go for brisk hilly walks 3 times a week and haven’t missed a day in months.&lt;sup id=&quot;fnref:18&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:18&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;6&lt;/a&gt;&lt;/sup&gt; And my resting heart rate has gone down from 70–75 bpm a few years ago to 55–58 bpm today, so I guess it’s working. I’ve also noticed I can do high-rep squats and deadlifts without getting winded. I remember in 2016 I deadlifted 225 for 10 reps and I basically died (my heart rate hit 190 bpm). A few weeks ago I deadlifted 315 for 11 (note: my 1-rep max has barely changed since 2016) and I felt fine (heart rate 144 bpm).&lt;/p&gt;

    &lt;p&gt;(I’ve read hardly any original research on cardio, but as I understand, the older research did show that HIIT was better than low-intensity cardio, and newer research changed that—see the 6-part series &lt;a href=&quot;https://x.com/Ekkekakis/status/1689692611018129408&quot;&gt;Extraordinary Claims in the Literature on High-Intensity Interval Training&lt;/a&gt; by Ekkekakis et al. (2023).&lt;sup id=&quot;fnref:17&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:17&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;7&lt;/a&gt;&lt;/sup&gt; So my beliefs basically tracked the research findings, although I was getting all my info second-hand.)&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;My old belief:&lt;/strong&gt; At different times, I believed (1) we don’t really know anything about nutrition; (2) food choice doesn’t matter as long as you don’t over-eat.&lt;/p&gt;

    &lt;p&gt;&lt;strong&gt;What changed my mind:&lt;/strong&gt; I read &lt;a href=&quot;https://nutritionfacts.org/book/how-not-to-die/&quot;&gt;How Not To Die&lt;/a&gt; in 2017, which referenced a large quantity of nutrition research that contradicted my previous beliefs. I now believe the book was wrong about some things (which I will discuss in the next line item), but it was more correct than my pre-2017 self. Later on, I read a wider variety of evidence-based nutrition advice.&lt;/p&gt;

    &lt;p&gt;My current position is that we are really pretty sure about some things in nutrition, and some foods are unhealthy even if you don’t over-eat. I believe conventional nutrition advice in educated circles is basically correct: trans fat, saturated fat, and added sugar are bad; processed food is generally bad; whole plant foods (especially fruits and veggies) are good.&lt;/p&gt;

    &lt;p&gt;(I still don’t have a great sense of the distinction between foods that make it easy to overeat and foods that are unhealthy at any bodyweight. Like I know that sugar in small quantities isn’t bad for healthy-weight individuals, but is that because the badness is too minor to detect, or because there’s some threshold below which sugar causes zero harm whatsoever?)&lt;/p&gt;

    &lt;p&gt;I updated my beliefs by following my “web of trust”: my layman friend trusts this dietitian; my other layman friend trusts this medical doctor who agrees with the first dietitian about most things; I trust Scott Alexander, and he likes this one &lt;a href=&quot;https://www.stephanguyenet.com/&quot;&gt;researcher&lt;/a&gt;, who endorsed this &lt;a href=&quot;https://www.redpenreviews.org/reviews/eat-drink-and-be-healthy/&quot;&gt;book&lt;/a&gt;; these two &lt;a href=&quot;https://www.barbellmedicine.com/&quot;&gt;guys&lt;/a&gt; know a lot about strength training and I like their epistemology,&lt;sup id=&quot;fnref:10&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:10&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;8&lt;/a&gt;&lt;/sup&gt; so they probably also know a thing or two about nutrition, and they agree with the guy Scott liked; etc.&lt;/p&gt;

    &lt;p&gt;According to my web of trust, the best book on nutrition is &lt;a href=&quot;https://mdickens.me/2024/05/23/notes_on_eat_drink_and_be_healthy/&quot;&gt;Eat, Drink, and Be Healthy&lt;/a&gt;, which as far as I know is the only book that makes a 100% earnest effort to represent the state of nutrition science.&lt;/p&gt;

    &lt;p&gt;I still haven’t made an effort to interpret the primary literature on nutrition science. All the really big studies are observational, and there’s an art to controlling for confounders, and I believe it would take a lot of work for me to understand how they do it. I trust that at least some researchers have a good conception of how to disentangle causality (Willett &amp;amp; Skerrett, authors of &lt;em&gt;Eat, Drink, and Be Healthy&lt;/em&gt;, do a good job of explaining why they believe observational studies establish causality in certain cases.&lt;sup id=&quot;fnref:9&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:9&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;9&lt;/a&gt;&lt;/sup&gt;)&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;My old belief:&lt;/strong&gt; After reading &lt;a href=&quot;https://nutritionfacts.org/book/how-not-to-die/&quot;&gt;How Not To Die&lt;/a&gt; in 2017, I believed processed unsaturated fats (such as olive oil) were unhealthy, and unsaturated fats should be consumed as whole foods (e.g. by eating nuts).&lt;/p&gt;

    &lt;p&gt;&lt;strong&gt;What changed my mind:&lt;/strong&gt; I &lt;a href=&quot;https://mdickens.me/2024/05/23/notes_on_eat_drink_and_be_healthy/&quot;&gt;read&lt;/a&gt; &lt;em&gt;Eat, Drink, and Be Healthy&lt;/em&gt;, which said olive oil is healthy, and it presented some evidence that looked reasonable to me. There’s a plausible mechanism for oil being healthy (it helps the body produce HDL and HDL sucks loose cholesterol out of the arteries), and there are some empirical studies where oil (esp. olive oil) was associated with better health outcomes, including at least one RCT.&lt;/p&gt;

    &lt;p&gt;The argument against olive oil in &lt;em&gt;How Not to Die&lt;/em&gt; is that it’s processed to remove some of the nutrition of the olive, which makes it less healthy. That’s not wrong—raw olives are probably healthier than olive oil—but realistically I’m not gonna replace olive oil with eating handfuls of raw olives,&lt;sup id=&quot;fnref:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;10&lt;/a&gt;&lt;/sup&gt; and the empirical evidence suggests that adding olive oil to a diet makes it healthier.&lt;/p&gt;

    &lt;p&gt;&lt;em&gt;How Not to Die&lt;/em&gt; recommends replacing olive oil with nuts. Some RCT evidence does suggest that nuts are healthier than olive oil, but nuts often don’t work as a substitute for oil (you can’t pan-fry food in a bed of nuts).&lt;/p&gt;

    &lt;p&gt;&lt;em&gt;How Not to Die&lt;/em&gt; is biased toward veganism, which I knew before I read it so I didn’t update much on the stuff about how all animal products are unhealthy, although I was vegan anyway so it didn’t affect my behavior. I largely trusted the book on non-animal subjects because it cited a lot of research and seemed well-reasoned. Based on my current understanding of mainstream positions among nutrition scientists, almost all of the non-animal stuff (and most of the animal stuff) in the book is mainstream, but it over-emphasizes the badness of processed foods in general. The mainstream position among nutrition scientists is that you should avoid “processed foods” as a general rule, but plenty of specific processed foods are fine, like olive oil or protein powder.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;My old belief:&lt;/strong&gt; The ideal BMI is around 18–20 (on the low end of the “healthy” range of 18.5 to 25).&lt;/p&gt;

    &lt;p&gt;&lt;strong&gt;What changed my mind:&lt;/strong&gt; My old belief wasn’t based on direct evidence. I just had a prior that official recommendations are gonna be too generous, for example recommending less exercise than is optimal because they don’t think people will actually do the optimal amount of exercise, or recommending a “healthy” BMI that’s actually a bit too generous because they think people will give up if they’re told to aim for a BMI of 20. I updated my belief after reading a &lt;a href=&quot;https://www.thelancet.com/journals/landia/article/PIIS2213-8587(18)30288-2/fulltext&quot;&gt;large study&lt;/a&gt;&lt;sup id=&quot;fnref:7&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:7&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;11&lt;/a&gt;&lt;/sup&gt; on BMI and all-cause mortality. As of writing the first draft of this post, I weakly believed the ideal was on the middle-high side of the “healthy” range (so around 22–23), but I wrote in my first draft, “I want to investigate this more.”&lt;/p&gt;

    &lt;p&gt;&lt;strong&gt;What changed my mind again:&lt;/strong&gt; I &lt;a href=&quot;https://mdickens.me/2024/05/05/healthiest_BMI/&quot;&gt;investigated more&lt;/a&gt;. (I had to take a diversion from writing the post you’re currently reading to write a different post about BMI.) Now I believe the ideal BMI is 20–22, for reasons I explain in the &lt;a href=&quot;https://mdickens.me/2024/05/05/healthiest_BMI/&quot;&gt;linked post&lt;/a&gt;. Lower than 20 is fine, maybe even better, if you have adequate lean mass. 22–23 appears to carry (slightly) greater health risks.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;My old belief:&lt;/strong&gt; I believed that I had seen an RCT that found that sunscreen didn’t work, and I had written that down in a note.&lt;/p&gt;

    &lt;p&gt;&lt;strong&gt;What changed my mind:&lt;/strong&gt; I looked up my note and saw that, in actuality, my note said that sunscreen &lt;em&gt;did&lt;/em&gt; work. I somehow flipped the sign of the outcome in my memory. I don’t understand how that happened.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ol&gt;

&lt;h2 id=&quot;miscellaneous&quot;&gt;Miscellaneous&lt;/h2&gt;

&lt;ol&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;My old belief:&lt;/strong&gt; RCTs are high-quality evidence.&lt;/p&gt;

    &lt;p&gt;&lt;strong&gt;What changed my mind:&lt;/strong&gt; I learned about the replication crisis, and (later) I read some actual RCTs. I now believe the median RCT is pretty badly designed and it shouldn’t change your beliefs much unless you’ve actually read it and understood its methodology. And, perhaps as a corollary, the median scientist isn’t very smart. (This doesn’t necessarily follow because there are reasons why smart scientists might publish dumb papers.)&lt;/p&gt;

    &lt;p&gt;(Doctors and professors appear to have average IQs around 115–125—see &lt;a href=&quot;https://users.ssc.wisc.edu/~hauser/merit_01_081502_complete.pdf&quot;&gt;Meritocracy, Cognitive Ability, and the Sources of Occupational Success&lt;/a&gt;—which is a full standard deviation below the IQs of most of my friends,&lt;sup id=&quot;fnref:20&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:20&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;12&lt;/a&gt;&lt;/sup&gt; and probably most of the people reading this. So maybe it’s fair after all to say the median scientist isn’t very smart. But you could still argue that a lifetime of expertise matters more than 15 IQ points.)&lt;/p&gt;

    &lt;p&gt;I see one area where scientists routinely mis-interpret their own evidence. They often struggle to understand the difference between&lt;/p&gt;

    &lt;ol&gt;
      &lt;li&gt;Our study found a large (in absolute terms) but non-significant effect because our study was underpowered.&lt;/li&gt;
      &lt;li&gt;Our study robustly established no effect: the standard error in our data was small enough that any meaningful effect would show up, and it didn’t.&lt;/li&gt;
    &lt;/ol&gt;

    &lt;p&gt;You especially see this fallacy in areas where small effect sizes still matter. For example, a 0.1% decrease in mortality risk matters a lot, but it’s very hard to detect with a study, and study authors often incorrectly conclude that the effect doesn’t exist when they fail to find it.&lt;/p&gt;
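    &lt;p&gt;A back-of-the-envelope power calculation shows why, using the standard normal-approximation formula for comparing two proportions (the mortality rates below are made up):&lt;/p&gt;

```python
def n_per_arm(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Approximate sample size per arm to detect p1 vs. p2
    at 5% significance with 80% power (normal approximation)."""
    effect = p1 - p2
    return (z_alpha + z_beta) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / effect**2

# Detecting a drop in mortality from 2.0% to 1.9%, i.e. a 0.1-percentage-point effect:
sample_size = n_per_arm(0.020, 0.019)  # roughly 300,000 people per arm
```

    &lt;p&gt;A study with a few hundred participants is off by three orders of magnitude, so its failure to find an effect of that size tells you almost nothing about whether the effect exists.&lt;/p&gt;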

    &lt;p&gt;(David J. Balan says &lt;a href=&quot;https://www.overcomingbias.com/p/doctor-there-arhtml&quot;&gt;there are two kinds of “no evidence”&lt;/a&gt;. I’d say there are three kinds: (1) we haven’t looked for evidence; (2) we looked for evidence in a way that &lt;a href=&quot;https://en.wikipedia.org/wiki/Streetlight_effect&quot;&gt;wasn’t gonna find any evidence&lt;/a&gt;, and we didn’t find any evidence; (3) we looked for evidence in a way that would have found it if it existed, and we still didn’t find it.)&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;My old belief:&lt;/strong&gt; RCTs on strength training are basically useless. I read &lt;a href=&quot;https://www.amazon.com/Practical-Programming-Strength-Training-Rippetoe/dp/0982522754&quot;&gt;Practical Programming for Strength Training&lt;/a&gt;, which argued that strength coaches know better than researchers because RCTs are deeply flawed: they test a group of untrained individuals over 12 weeks or less, and that sort of training context doesn’t generalize to a more-experienced individual who follows a program for a year or longer.&lt;/p&gt;

    &lt;p&gt;&lt;strong&gt;What changed my mind:&lt;/strong&gt; This one’s interesting because it’s the opposite of the previous line item.&lt;/p&gt;

    &lt;p&gt;I started listening to some more research-driven experts like &lt;a href=&quot;https://www.barbellmedicine.com/&quot;&gt;Barbell Medicine&lt;/a&gt; and &lt;a href=&quot;https://www.strongerbyscience.com/&quot;&gt;Stronger by Science&lt;/a&gt; and hearing their perspective on scientific studies. While it’s true that many (most?) sports science RCTs don’t generalize, there are plenty of studies that correct for the criticisms made by &lt;em&gt;Practical Programming for Strength Training&lt;/em&gt;.&lt;/p&gt;

    &lt;p&gt;Why the difference between “RCTs are bad” in the previous line item and “RCTs are good actually” now? As best I can figure, this is the deal:&lt;/p&gt;

    &lt;ol&gt;
      &lt;li&gt;RCTs are bad if you blindly accept all of them.&lt;/li&gt;
      &lt;li&gt;RCTs are good if you know how to read a study and understand where it does and does not generalize.&lt;/li&gt;
    &lt;/ol&gt;

    &lt;p&gt;The science popularizers I pay attention to know how to distinguish between bad and good studies, and how to synthesize commonalities that repeatedly appear in many studies.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;My old belief:&lt;/strong&gt; Before 2009, I disagreed with affirmative action because I thought it was anti-meritocratic.&lt;/p&gt;

    &lt;p&gt;&lt;strong&gt;What changed my mind:&lt;/strong&gt; In 2009, I saw a debate in which the pro-affirmative action side won. I did not actually read the debate but I thought the pro side seemed credible so I changed my position.&lt;/p&gt;

    &lt;p&gt;&lt;strong&gt;What changed my mind again:&lt;/strong&gt; I now disagree with affirmative action. The main argument that changed my mind was that affirmative action has been around for a generation so if it was going to work, we would certainly see the benefits by now, and we don’t.&lt;/p&gt;

    &lt;p&gt;To be a little more specific, there are (broadly speaking) two theories for why affirmative action ought to work:&lt;/p&gt;

    &lt;ol&gt;
      &lt;li&gt;Gatekeepers (e.g. hiring managers) unfairly discriminate against minorities, and affirmative action cancels this out.&lt;/li&gt;
      &lt;li&gt;Minorities underperform in some areas because they haven’t been given sufficient opportunities. Affirmative action gives them those opportunities so that they can get better.&lt;/li&gt;
    &lt;/ol&gt;

    &lt;p&gt;The first theory is pretty easy to test: if minorities outperform after being accepted, then they were being discriminated against. Empirically, we see the opposite—for example, racial minorities at universities have on average lower GPAs than whites. The exception is Asians, who do actually outperform, which suggests they’re &lt;a href=&quot;https://en.wikipedia.org/wiki/Students_for_Fair_Admissions_v._Harvard&quot;&gt;being discriminated against&lt;/a&gt;. But we don’t need affirmative action to fix anti-Asian discrimination; in fact, it’s &lt;em&gt;caused&lt;/em&gt; by affirmative action.&lt;/p&gt;

    &lt;p&gt;You can test the second theory in a similar way: after being accepted, do minorities on average improve their performance? Empirically, they don’t. You can wiggle out of this by saying they still face hurdles even after being accepted to (e.g.) university. But even the children of under-represented minorities who went to elite colleges still underperform (on average). If affirmative action doesn’t improve outcomes even for the children of the beneficiaries, it’s probably never going to work.&lt;/p&gt;

    &lt;p&gt;That doesn’t prove that minorities &lt;em&gt;don’t&lt;/em&gt; face hardships that hamper their performance, it just proves that affirmative action doesn’t do anything to rectify those hardships.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;My old belief:&lt;/strong&gt; Regulation is basically good; free markets often hurt people.&lt;/p&gt;

    &lt;p&gt;&lt;strong&gt;What changed my mind:&lt;/strong&gt; I now believe there’s way too much regulation and free markets are almost always good. I wouldn’t go so far as to say &lt;em&gt;all&lt;/em&gt; regulations are bad, but I think the developed world would be much better off if lawmakers simply deleted 75% of existing regulations. Nor would I say free markets are &lt;em&gt;always&lt;/em&gt; good, but I’d guess that the ratio of economic problems caused by market restrictions to problems caused by overly free markets is about 20:1.&lt;/p&gt;

    &lt;p&gt;I can’t pinpoint a specific period where I changed my mind but it happened somewhere between the beginning and the end of college. The main things that changed my mind were (1) getting better at applying basic economic reasoning (I took econ in 11th grade but I didn’t start applying it to life until later) and (2) reading economist polls and updating toward economists’ beliefs.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;My old belief:&lt;/strong&gt; Maybe if I force myself to go to enough social events, I will learn to enjoy them.&lt;/p&gt;

    &lt;p&gt;&lt;strong&gt;What changed my mind:&lt;/strong&gt; I went to a whole bunch of social events that I didn’t want to go to and I never learned to enjoy them. I now believe it’s better to simply not go to social events that I don’t want to go to.&lt;/p&gt;

    &lt;p&gt;(People often tell me things like, “You should come, you’ll start enjoying it once you get there!” These people are badly failing to model the fact that my brain does not work the same as their brain.)&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;My old belief:&lt;/strong&gt; In 8th grade Spanish class, the teacher had an accent I was unfamiliar with. I thought he was mis-pronouncing certain Spanish words.&lt;/p&gt;

    &lt;p&gt;&lt;strong&gt;What changed my mind:&lt;/strong&gt; I learned that his pronunciations were a regional accent.&lt;/p&gt;

    &lt;p&gt;I don’t expect 8th graders to have particularly good reasoning abilities but this mistake feels especially severe. Surely I could have figured out that I, a kid who has taken one year of Spanish, do not know more about Spanish pronunciation than this guy who is a native Spanish speaker and teaches Spanish.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;My old belief:&lt;/strong&gt; Nuclear power is too dangerous, mainly because we can’t safely manage radioactive waste.&lt;/p&gt;

    &lt;p&gt;&lt;strong&gt;What changed my mind:&lt;/strong&gt; In 2010, I heard about Kahan et al.’s &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1549444&quot;&gt;Cultural Cognition of Scientific Consensus&lt;/a&gt;&lt;sup id=&quot;fnref:15&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:15&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;13&lt;/a&gt;&lt;/sup&gt;. The paper studied how people form beliefs on two issues where many people disagree with the scientific consensus—climate change and the disposal of nuclear wastes. Kahan et al. cited a &lt;a href=&quot;https://www.nrc.gov/docs/ML0413/ML041330436.pdf&quot;&gt;consensus report&lt;/a&gt;&lt;sup id=&quot;fnref:16&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:16&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;14&lt;/a&gt;&lt;/sup&gt; where scientists agreed that radioactive waste can be disposed of safely. This was the first time I came in contact with the notion that the scientific consensus supports nuclear power. I knew very little about nuclear waste disposal (and still know little), but I changed my belief to align with the scientific consensus.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;My old belief:&lt;/strong&gt; Creationists are uniquely bad at reasoning.&lt;/p&gt;

    &lt;p&gt;&lt;strong&gt;What changed my mind:&lt;/strong&gt; I know very smart and educated people who believe things that are about as obviously-wrong as creationism (such as the labor theory of value, or that infants are “blank slates”,&lt;sup id=&quot;fnref:12&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:12&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;15&lt;/a&gt;&lt;/sup&gt; or nuclear waste can’t be disposed of safely—wait, that last one was me).&lt;/p&gt;

    &lt;p&gt;I now believe it’s basically impossible to avoid believing dumb things. I still think creationism is extremely wrong, but I have a lot of sympathy for creationists, and I don’t think they’re particularly worse at reasoning than anyone else. I believe you should &lt;a href=&quot;https://slatestarcodex.com/2019/02/26/rule-genius-in-not-out/&quot;&gt;rule thinkers in, not out&lt;/a&gt;.&lt;/p&gt;

    &lt;p&gt;The thing is, if you live in a bubble, you might not ever hear the arguments for why the labor theory of value is wrong, or why blank slatism is wrong. Or you might hear arguments but not good ones, perhaps because knowledgeable people &lt;a href=&quot;https://www.astralcodexten.com/p/contra-kavanaugh-on-fideism&quot;&gt;would rather make fun of you than try to persuade you&lt;/a&gt;. So I find it totally understandable that people hold on to these beliefs. And I think of creationism the same way—some people have the misfortune to live in bubbles where everyone around them believes in creationism and they never hear good counter-arguments.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ol&gt;

&lt;h1 id=&quot;what-patterns-emerge&quot;&gt;What patterns emerge?&lt;/h1&gt;

&lt;p&gt;Did I systematically change my mind in certain ways? Can I predict how I might change my mind in the future?&lt;/p&gt;

&lt;p&gt;I can see four broad patterns:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Initially, I overconfidently believed the first credible-sounding thing I heard. Later, I moderated my beliefs when I learned about other perspectives.&lt;/li&gt;
  &lt;li&gt;I investigated an area where not much is known, so I had to figure things out on my own. My initial conclusion was wrong, and I changed my mind by investigating more deeply.&lt;/li&gt;
  &lt;li&gt;I used to believe individual scientific studies, and now I don’t give them much credibility.&lt;/li&gt;
  &lt;li&gt;I pay more attention to the scientific consensus.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;On #4, it’s not so much that I ever wanted to disagree with the scientific consensus, but that it’s hard to know what scientists believe. On economics, nutrition, and nuclear power, I used to disagree with the consensus, but only because I didn’t know what the consensus &lt;em&gt;was&lt;/em&gt;.&lt;/p&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:2&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;My argument went like this:&lt;/p&gt;

      &lt;p&gt;From a financial perspective, buying a house is isomorphic to buying a rental property and renting it out, while also paying rent to live at a second, identical house. A rental property is not a good investment because it’s highly non-diversified. If you shouldn’t buy a house as a rental property, then you shouldn’t buy a house to live in, either.&lt;/p&gt;

      &lt;p&gt;(People can pretend houses aren’t risky because the price doesn’t update on a minute-to-minute basis, but a single house is a much riskier investment than an index fund.) &lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:5&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;There are some caveats to this:&lt;/p&gt;

      &lt;ol&gt;
        &lt;li&gt;Buying a house only works as a hedge if you plan on living there forever. If you ever plan on moving, you’re subject to fluctuations in the price of your current house relative to that of your future house.&lt;/li&gt;
        &lt;li&gt;It only works if you get a competitive mortgage interest rate and if the value of the hedge balances out the opportunity cost of pouring a bunch of money at once into a house.&lt;/li&gt;
        &lt;li&gt;Houses have maintenance expenditures and the like, although these should be priced into rent.&lt;/li&gt;
      &lt;/ol&gt;
      &lt;p&gt;&lt;a href=&quot;#fnref:5&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:13&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I read Starting Strength based on a recommendation in a &lt;a href=&quot;https://www.lesswrong.com/posts/iTzvJ7kKK2TYJhYHB/solved-problems-repository?commentId=nuonKebTWiTnoWJMC&quot;&gt;LessWrong comment&lt;/a&gt;: “The set of my friends who are strong is exactly the set of my friends who do / have done Starting Strength or a close variant.” &lt;a href=&quot;#fnref:13&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:14&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I like what Mike Israetel said in an &lt;a href=&quot;https://www.youtube.com/watch?v=hO0F9L_Iuuo&amp;amp;t=286s&quot;&gt;interview&lt;/a&gt; (paraphrased): “If you can point out a dude to me and tell [based on how he looks] that he trains with sets of 20–30 and point out another dude and say that dude trains with sets of 5–8, I’d be super impressed, because I can’t.” &lt;a href=&quot;#fnref:14&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:4&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;The standard rule of thumb is the “talk test”: if you can carry out a conversation with a little bit of difficulty, it’s the right intensity. &lt;a href=&quot;#fnref:4&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:18&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;This sentence was true when I originally wrote it. Now I’m revising and I feel the need to note that I missed a day last week because I was sick. &lt;a href=&quot;#fnref:18&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:17&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Ekkekakis et al. (2022–2023). Extraordinary Claims in the Literature of High-Intensity Interval Training.&lt;/p&gt;

      &lt;ul&gt;
        &lt;li&gt;&lt;a href=&quot;https://doi.org/10.1007/s40279-023-01880-7&quot;&gt;I. Bonafide Scientific Revolution or a Looming Crisis of Replication and Credibility?&lt;/a&gt;&lt;/li&gt;
        &lt;li&gt;&lt;a href=&quot;https://doi.org/10.1123/kr.2022-0003&quot;&gt;II. Are The Extraordinary Claims Supported by Extraordinary Evidence?&lt;/a&gt;&lt;/li&gt;
        &lt;li&gt;&lt;a href=&quot;https://doi.org/10.1016/j.psychsport.2023.102399&quot;&gt;III. Critical analysis of four foundational arguments from an interdisciplinary lens.&lt;/a&gt;&lt;/li&gt;
        &lt;li&gt;&lt;a href=&quot;https://doi.org/10.1016/j.psychsport.2022.102295&quot;&gt;IV. Is HIIT associated with higher long-term exercise adherence?&lt;/a&gt;&lt;/li&gt;
        &lt;li&gt;&lt;a href=&quot;https://doi.org/10.1123/jsep.2022-0027&quot;&gt;A Methodological Checklist of Studies for Pleasure and Enjoyment Responses to High-Intensity Interval Training: Part I. Participants and Measures.&lt;/a&gt;&lt;/li&gt;
        &lt;li&gt;&lt;a href=&quot;https://doi.org/10.1123/jsep.2022-0029&quot;&gt;A Methodological Checklist of Studies for Pleasure and Enjoyment Responses to High-Intensity Interval Training: Part II. Intensity, Timing of Assessments, Data Modeling, and Interpretation.&lt;/a&gt;&lt;/li&gt;
      &lt;/ul&gt;
      &lt;p&gt;&lt;a href=&quot;#fnref:17&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:10&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I generally like the epistemology of the Barbell Medicine guys, but we do disagree sometimes. I’ve noticed a common pattern in how we disagree.&lt;/p&gt;

      &lt;p&gt;Say there’s a question about whether A is good, and the state of the evidence is that&lt;/p&gt;

      &lt;ol&gt;
        &lt;li&gt;There’s some theoretical reason to expect A to be good&lt;/li&gt;
        &lt;li&gt;Some limited empirical research has failed to find that A is good&lt;/li&gt;
      &lt;/ol&gt;

      &lt;p&gt;In that case, they believe A is not good, and I believe A is good.&lt;/p&gt;

      &lt;p&gt;For example, they say you shouldn’t take a multivitamin because RCTs generally haven’t found benefits. I say you &lt;em&gt;should&lt;/em&gt; take a multivitamin because they’re cheap and we know vitamins are important in principle.&lt;/p&gt;

      &lt;p&gt;I believe scientists and doctors tend to overweight weakly negative empirical findings relative to theory. I agree with what Scott Alexander wrote in &lt;a href=&quot;https://slatestarcodex.com/2020/04/14/a-failure-but-not-of-prediction/&quot;&gt;A Failure, But Not Of Prediction&lt;/a&gt;: authority figures said (in April 2020, when he wrote the post) that masks don’t prevent the spread of disease because there’s no supporting RCT evidence. But Scott believes masks are worth using because there’s good theoretical reason to expect them to work.&lt;/p&gt;

      &lt;p&gt;&lt;img src=&quot;https://slatestarcodex.com/blog_images/goofus.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

      &lt;p&gt;The Barbell Medicine guys understand perfectly well that sometimes theory matters more than empirical evidence, it’s just that I tend to favor theory a little bit more than they do. &lt;a href=&quot;#fnref:10&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:9&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;For example, I was concerned that a lot of associations between diet and health were confounded by socioeconomic class (rich people eat more veggies) or by conscientiousness (conscientious people eat more veggies plus they’re less inclined to over-eat). But Willett &amp;amp; Skerrett give evidence that some nutritional findings can’t be explained by class or conscientiousness.&lt;/p&gt;

      &lt;ul&gt;
        &lt;li&gt;On socioeconomic class: Greeks today and Chinese people in the 1990s were healthier than Americans even though they were poorer.&lt;/li&gt;
        &lt;li&gt;On conscientiousness: Americans who conscientiously followed the 1990s USDA guidelines had worse health outcomes (because the guidelines were dumb).&lt;/li&gt;
      &lt;/ul&gt;

      &lt;p&gt;(Also: Isn’t it kind of insane that people who followed the USDA guidelines had worse health outcomes? In general, every diet works, whether it’s low-carb, low-fat, paleo, or whatever, because diets force you to pay more attention to what you eat and restrict your caloric intake. It’s perversely impressive that the USDA managed to come up with a diet (possibly the only diet ever?) that actually makes health worse.) &lt;a href=&quot;#fnref:9&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:3&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Where do you even buy raw olives? I’ve only ever seen jarred or canned olives soaked in brine. &lt;a href=&quot;#fnref:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:7&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Bhaskaran, K., dos-Santos-Silva, I., Leon, D. A., Douglas, I. J., &amp;amp; Smeeth, L. (2018). &lt;a href=&quot;https://www.thelancet.com/journals/landia/article/PIIS2213-8587(18)30288-2/fulltext&quot;&gt;Association of BMI with overall and cause-specific mortality: a population-based cohort study of 3-6 million adults in the UK.&lt;/a&gt; &lt;a href=&quot;#fnref:7&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:20&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Earlier in this post, I wrote about how most caffeine studies had bad methodology. As a test, I described the methodology of one study to a friend and asked them what they thought about it, and they immediately pointed out the same flaw that I had noticed. So clearly it’s not just me. &lt;a href=&quot;#fnref:20&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:15&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Kahan, D. M., Jenkins‐Smith, H., &amp;amp; Braman, D. (2011). &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1549444&quot;&gt;Cultural cognition of scientific consensus.&lt;/a&gt;&lt;/p&gt;

      &lt;p&gt;(Note: The cited version was published in 2011 but the paper was originally posted online in 2010.) &lt;a href=&quot;#fnref:15&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:16&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;National Research Council. Board on Radioactive Waste Management. (1990). &lt;a href=&quot;https://www.nrc.gov/docs/ML0413/ML041330436.pdf&quot;&gt;Rethinking high-level radioactive waste disposal: A position statement of the Board on Radioactive Waste Management, Commission on Geosciences, Environment, and Resources, National Research Council.&lt;/a&gt; &lt;a href=&quot;#fnref:16&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:12&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;It would derail the post too much for me to explain why I believe these beliefs are on par with creationism, but I can throw out some links:&lt;/p&gt;

      &lt;ol&gt;
        &lt;li&gt;&lt;a href=&quot;https://en.wikipedia.org/wiki/Criticisms_of_the_labour_theory_of_value&quot;&gt;Criticisms of the labor theory of value&lt;/a&gt; on Wikipedia&lt;/li&gt;
        &lt;li&gt;&lt;a href=&quot;https://stevenpinker.com/files/pinker/files/the_blank_slate_general_psychologist.pdf&quot;&gt;The Blank Slate&lt;/a&gt;, an article by Steven Pinker that summarizes his book, titled (surprise!) &lt;em&gt;The Blank Slate&lt;/em&gt;&lt;/li&gt;
      &lt;/ol&gt;
      &lt;p&gt;&lt;a href=&quot;#fnref:12&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>Notes on Eat, Drink, and Be Healthy</title>
				<pubDate>Thu, 23 May 2024 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2024/05/23/notes_on_eat_drink_and_be_healthy/</link>
				<guid isPermaLink="true">http://mdickens.me/2024/05/23/notes_on_eat_drink_and_be_healthy/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;I recently read &lt;a href=&quot;https://www.amazon.com/Eat-Drink-Be-Healthy-Harvard/dp/1501164775&quot;&gt;Eat, Drink, and Be Healthy: The Harvard Medical School Guide to Healthy Eating&lt;/a&gt;. As I understand, it’s the book that does the best job of representing the mainstream scientific perspective on nutrition for a lay audience. Here are the notes I took.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Last updated 2024-09-02.&lt;/em&gt;&lt;/p&gt;

&lt;!-- more --&gt;

&lt;p&gt;I wrote down info that I found personally useful, which often doesn’t translate to other people. For example, if the book gave a nutrition fact that I already confidently believed, I didn’t write it down, but someone else might have benefited from reading that fact.&lt;/p&gt;

&lt;p&gt;Unless otherwise specified, these notes represent my interpretation of the author’s perspective and first-person pronouns represent the author, not me. Any sentence preceded by “me:” is my perspective.&lt;/p&gt;

&lt;h2 id=&quot;contents&quot;&gt;Contents&lt;/h2&gt;

&lt;ul id=&quot;markdown-toc&quot;&gt;
  &lt;li&gt;&lt;a href=&quot;#contents&quot; id=&quot;markdown-toc-contents&quot;&gt;Contents&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#chapters-13-healthy-eating-matters&quot; id=&quot;markdown-toc-chapters-13-healthy-eating-matters&quot;&gt;Chapters 1–3: Healthy Eating Matters&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#chapter-4-healthy-weight&quot; id=&quot;markdown-toc-chapter-4-healthy-weight&quot;&gt;Chapter 4: Healthy Weight&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#chapter-5-straight-talk-about-fat&quot; id=&quot;markdown-toc-chapter-5-straight-talk-about-fat&quot;&gt;Chapter 5: Straight Talk About Fat&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#chapter-6-carbohydrates-for-better-and-worse&quot; id=&quot;markdown-toc-chapter-6-carbohydrates-for-better-and-worse&quot;&gt;Chapter 6: Carbohydrates for Better and Worse&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#chapter-7-choose-healthier-sources-of-protein&quot; id=&quot;markdown-toc-chapter-7-choose-healthier-sources-of-protein&quot;&gt;Chapter 7: Choose Healthier Sources of Protein&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#chapter-8-eat-plenty-of-fruits-and-vegetables&quot; id=&quot;markdown-toc-chapter-8-eat-plenty-of-fruits-and-vegetables&quot;&gt;Chapter 8: Eat Plenty of Fruits and Vegetables&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#chapter-9-you-are-what-you-drink&quot; id=&quot;markdown-toc-chapter-9-you-are-what-you-drink&quot;&gt;Chapter 9: You Are What You Drink&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#chapters-1011-vitamins-and-minerals&quot; id=&quot;markdown-toc-chapters-1011-vitamins-and-minerals&quot;&gt;Chapters 10–11: Vitamins and Minerals&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#notes-from-red-pen-reviews&quot; id=&quot;markdown-toc-notes-from-red-pen-reviews&quot;&gt;Notes from Red Pen Reviews&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#footnotes&quot; id=&quot;markdown-toc-footnotes&quot;&gt;Footnotes&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;chapters-13-healthy-eating-matters&quot;&gt;Chapters 1–3: Healthy Eating Matters&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;Soybean oil and canola oil are healthy&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;&lt;/li&gt;
  &lt;li&gt;The association between “healthy” foods and good health can’t be explained by socioeconomic status or conscientiousness because Greeks are healthier but poorer, and people who followed USDA guidelines did worse than people who followed “good” diets
    &lt;ul&gt;
      &lt;li&gt;me: This comment gave the author more credibility in my mind. I can’t easily evaluate the author’s empirical claims, but I can evaluate logic, and I thought this was a strong logical argument&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;American diets have gotten healthier over the last 20 years. (me: so why are people getting fatter?)&lt;/li&gt;
  &lt;li&gt;RCTs are expensive. A nutrition and breast cancer study cost $2 billion and got inconclusive results. We mainly do cohort studies, which follow a group of people over time; they have less room for bias than retrospective self-reports (asking people with a disease about their eating habits)&lt;/li&gt;
  &lt;li&gt;Nurses’ Health Study is good because nurses are more diligent and accurate about reporting what they eat&lt;/li&gt;
  &lt;li&gt;Japanese migrants in America have American levels of heart disease, which shows it’s not genetic&lt;/li&gt;
  &lt;li&gt;Moderate alcohol does actually prevent heart disease. Experiments show it raises HDL&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;chapter-4-healthy-weight&quot;&gt;Chapter 4: Healthy Weight&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;me: I examined BMI in more detail in &lt;a href=&quot;https://mdickens.me/2024/05/05/healthiest_BMI/&quot;&gt;What’s the Healthiest BMI?&lt;/a&gt;. &lt;em&gt;Eat, Drink, and Be Healthy&lt;/em&gt; primarily cites the same two meta-analyses that I cited (and the book’s primary author also co-authored one of the meta-analyses (along with 61 other authors lol))&lt;/li&gt;
  &lt;li&gt;Higher BMI is worse even within the healthy weight range, see Nurses’ Health Study
    &lt;ul&gt;
      &lt;li&gt;me: the reported data looks at CVD and diabetes and stuff but not at respiratory disease&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;Guidelines say BMI should be 18.5 to 25; that’s the cohort with the lowest death rate after controlling for smoking etc. But really, health starts getting worse at BMI &amp;gt; 22&lt;/li&gt;
  &lt;li&gt;BMI = weight (kg) / height (m)^2 or weight (lbs) / height (in)^2 * 703&lt;/li&gt;
  &lt;li&gt;BMI under 18.5 is bad if you’re sick but it’s fine if you’re just thin
    &lt;ul&gt;
      &lt;li&gt;me: for my perspective on this, see &lt;a href=&quot;https://mdickens.me/2024/05/22/healthiest_body_composition/&quot;&gt;What’s the Healthiest Body Composition?&lt;/a&gt;, especially the bits about Lee et al. (2018). The claim appears to be true, but there are approximately zero people with healthy lean mass and a BMI under 18.5&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
&lt;/ul&gt;
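The two BMI formulas in the notes above (metric, and US-customary with the 703 correction factor) can be sketched in Python. This is my own illustration, not from the book; the function names are made up:

```python
def bmi_metric(weight_kg: float, height_m: float) -> float:
    """BMI = weight (kg) / height (m)^2."""
    return weight_kg / height_m ** 2

def bmi_us(weight_lb: float, height_in: float) -> float:
    """BMI = weight (lb) / height (in)^2 * 703."""
    return weight_lb / height_in ** 2 * 703

# Example: 70 kg at 1.75 m gives a BMI of about 22.9,
# inside the 18.5-25 guideline range but above the ~22
# threshold where health reportedly starts getting worse.
print(round(bmi_metric(70, 1.75), 1))
```

The 703 factor just converts pounds and inches into the metric definition, so the two functions agree on the same body.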

&lt;h3 id=&quot;chapter-5-straight-talk-about-fat&quot;&gt;Chapter 5: Straight Talk About Fat&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;Belly fat is worse than hip/butt fat. Unclear why, maybe because it affects hormones more&lt;/li&gt;
  &lt;li&gt;Controlling weight is the most important component of diet. Luckily, a healthy diet is a subset of diets that make it hard to overeat (me: fad diets can be good for controlling weight but aren’t optimal for health). Ex: fiber is satiating and also good for digestive health&lt;/li&gt;
  &lt;li&gt;Insulin makes your body convert calories to fat and reluctant to burn fat; people with high insulin will be hungry even if they have plentiful fat stores. That’s a reason to prefer foods with low glycemic index&lt;/li&gt;
  &lt;li&gt;Proteins/fats stay in the stomach for longer which reduces hunger&lt;/li&gt;
  &lt;li&gt;45% of a database of people who lost weight didn’t use a Diet, they “did it themselves”&lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt;&lt;/li&gt;
  &lt;li&gt;Unsaturated fat lowers LDL (“bad”) cholesterol and raises HDL (“good”) cholesterol (compared to carbs); saturated raises both; trans raises LDL&lt;/li&gt;
  &lt;li&gt;Fat is not water soluble, so it is transmitted through bloodstream by protein packages (lipoproteins)&lt;/li&gt;
  &lt;li&gt;Saturated fats have varying effects. Coconut oil raises HDL more than beef or butter fat, but its effect on LDL still makes it net unhealthy&lt;/li&gt;
  &lt;li&gt;LDL accumulates in blood vessels—it’s low density so it’s more likely to get stuck. HDL removes LDL from blood vessels&lt;/li&gt;
  &lt;li&gt;Omega-6 doesn’t increase inflammation, it reduces it. The 3:6 ratio didn’t matter in the Nurses’ Health Study&lt;/li&gt;
  &lt;li&gt;Eggs are fine. Low in saturated fat&lt;/li&gt;
  &lt;li&gt;Nuts beat olive oil in RCTs&lt;/li&gt;
  &lt;li&gt;Red meat is bad for colon cancer, which could be b/c of fat content or b/c of chemicals generated by cooking at high temperatures. But biggest risk factor for colon cancer is being overweight&lt;/li&gt;
  &lt;li&gt;ALA associated with prostate cancer, but likely b/c ALA-rich fats used to be partially hydrogenated most of the time. Walnuts are high in ALA and aren’t associated with prostate cancer. Future studies will see if the ALA-cancer connection persists now that trans fats are mostly gone from diets, but you probably shouldn’t worry about ALA&lt;/li&gt;
  &lt;li&gt;Dietary fat has no detectable association with cancer. Saturated and trans fats are bad for heart disease so that’s what we should focus on&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;chapter-6-carbohydrates-for-better-and-worse&quot;&gt;Chapter 6: Carbohydrates for Better and Worse&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;Grains are optional&lt;/li&gt;
  &lt;li&gt;Carbs looked healthy in old China studies but don’t look healthy in the west. Likely due to (1) Chinese were more physically active and (2) they ate whole or lightly refined grains. The badness of carbs mainly comes when you don’t burn them off (?)&lt;/li&gt;
  &lt;li&gt;Simple vs complex carb isn’t that important. Glycemic index (GI) and refined vs whole are more important. Glycemic index, not carb complexity, determines blood sugar&lt;/li&gt;
  &lt;li&gt;Potatoes and corn are fast digesting. Corn is technically whole grain but it’s been bred to be fast digesting so it’s more like white rice&lt;/li&gt;
  &lt;li&gt;Fast digesting carbs spike blood sugar then spike insulin so your body quickly absorbs the blood sugar, now you have low blood sugar which triggers hunger&lt;/li&gt;
  &lt;li&gt;Table sugar and corn syrup have the same impact on blood sugar and metabolism&lt;/li&gt;
  &lt;li&gt;Insulin resistance means cells resist the signal telling them to absorb sugar, so blood sugar stays elevated for longer. Insulin-producing cells in the pancreas get overworked and stop working&lt;/li&gt;
  &lt;li&gt;Contributors to diabetes: obesity; sedentariness, because muscles are good at consuming glucose; low polyunsaturated fatty acid (PUFA) and high saturated fat; genetics&lt;/li&gt;
  &lt;li&gt;Finely ground whole wheat is high GI but it’s still healthy due to fiber and nutrients&lt;/li&gt;
  &lt;li&gt;Low-GI foods help prevent diabetes&lt;/li&gt;
  &lt;li&gt;High-fiber cereal helps prevent diabetes&lt;/li&gt;
  &lt;li&gt;Early studies showed fiber reduced gut cancer, but later studies e.g. Nurses’ Health showed no effect&lt;/li&gt;
  &lt;li&gt;Whole grains have 1:10 or 1:5 fiber:carb ratio&lt;/li&gt;
  &lt;li&gt;Added fiber like cellulose isn’t as good as whole fiber b/c it’s missing micronutrients and it doesn’t encapsulate the carbs to slow digestion (book calls added fibers “fake fiber”)&lt;/li&gt;
  &lt;li&gt;Eat whole grain cereal for breakfast. Ex: Wheaties, Grape Nuts, Kashi, Shredded Wheat, Wheat Chex&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;chapter-7-choose-healthier-sources-of-protein&quot;&gt;Chapter 7: Choose Healthier Sources of Protein&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;BCAAs turn up IGF-1 which helps muscles grow but also cancer&lt;/li&gt;
  &lt;li&gt;Processed meat causes cancer; red meat probably causes cancer. Mainly colorectal&lt;/li&gt;
  &lt;li&gt;Country-level data shows plant proteins associated with less cancer than animal proteins, but country data is highly confounded. It’s probably the stuff that comes with the protein that matters, not the pure protein&lt;/li&gt;
  &lt;li&gt;ALA reduces clotting&lt;/li&gt;
  &lt;li&gt;Soy effect on heart disease is overstated. What studies actually show is that heart disease is reduced when you replace red meat with soy&lt;/li&gt;
  &lt;li&gt;Soy contains phytoestrogens which may prevent breast cancer. Phytoestrogens act like estrogen in some places and block it in others. They appear to block estrogen in cancer cells, where estrogen would otherwise stimulate growth&lt;/li&gt;
  &lt;li&gt;Some fish contain mercury but many species (e.g. salmon) don’t have concerning amounts. Don’t worry about it unless you’re a child or pregnant. Fish oil supplements have less mercury than fish but don’t show the same health benefits&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;chapter-8-eat-plenty-of-fruits-and-vegetables&quot;&gt;Chapter 8: Eat Plenty of Fruits and Vegetables&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;The effects of the vast majority of plant chemicals have yet to be determined. We don’t even know most of the chemicals that are in plants&lt;/li&gt;
  &lt;li&gt;Fruit and veg, especially greens and citrus (including juice), lower blood pressure and reduce stroke. Folic acid supplements work too. Fruit and veg shown to reduce blood pressure in RCT&lt;/li&gt;
  &lt;li&gt;Cataracts and macular degeneration may be caused by free radicals&lt;/li&gt;
  &lt;li&gt;1/3 of cancer is explained by diet. Possibly an overestimate&lt;/li&gt;
  &lt;li&gt;One study found that fruit in adolescence does more to prevent cancer than fruit in middle age&lt;/li&gt;
  &lt;li&gt;Berries are S tier fruit, especially blueberries&lt;/li&gt;
  &lt;li&gt;Fiber sticks to cholesterol and you poop it out&lt;/li&gt;
  &lt;li&gt;Newest studies show fiber doesn’t help with colon cancer. But it does still stabilize blood sugar which reduces diabetes, lowers triglycerides, and improves gut microbiome&lt;/li&gt;
  &lt;li&gt;It looks like many diseases like CVD and cancer are driven by deficiencies in some phytonutrients, but we don’t know which. Folate is probably one&lt;/li&gt;
  &lt;li&gt;Eat a variety of colors. Get one serving a day each of: dark leafy greens, yellow/orange fruit/veg, red fruit/veg, legumes, citrus&lt;/li&gt;
  &lt;li&gt;Cooked tomatoes are better than raw b/c your body has a hard time absorbing lycopene from raw tomatoes&lt;/li&gt;
  &lt;li&gt;Some veggies have chemicals that are bad if you eat too much, but it’s hard to eat too much&lt;/li&gt;
  &lt;li&gt;Juice and smoothies are bad because (1) easy to over-eat and (2) juice makes sugar absorb faster (me: they didn’t say if smoothies do that)
    &lt;ul&gt;
      &lt;li&gt;me: I spent a little time online trying to determine if smoothies are healthy or not (ignoring calorie content), my main concern being whether blending destroys fiber/nutrients. I found zero relevant papers on Google Scholar. I found various people, including dietitians (e.g. &lt;a href=&quot;https://www.hopkinsmedicine.org/health/wellness-and-prevention/how-to-make-a-healthy-smoothie&quot;&gt;on Hopkins Medicine&lt;/a&gt;) asserting without evidence that blending doesn’t destroy nutrients. So I assume smoothies are healthy but I don’t really know&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;chapter-9-you-are-what-you-drink&quot;&gt;Chapter 9: You Are What You Drink&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;Milk in adolescence increases height, which in turn increases the risk of hip fractures&lt;/li&gt;
  &lt;li&gt;Coffee and tea reduce gallstones, possibly b/c they stimulate gall bladder activity&lt;/li&gt;
  &lt;li&gt;Coffee, even decaf, reduces diabetes. Possibly due to antioxidants&lt;/li&gt;
  &lt;li&gt;Coffee reduces Parkinson’s&lt;/li&gt;
  &lt;li&gt;Tea has flavonoids which may reduce CVD&lt;/li&gt;
  &lt;li&gt;Alcohol raises HDL. CVD benefit at 1–2 drinks per day for men. Can be any kind of alcohol. Benefits of wine specifically are unproven&lt;/li&gt;
  &lt;li&gt;Alcohol raises breast cancer risk in women even at half a drink per day. Folate counteracts this&lt;/li&gt;
  &lt;li&gt;Alcohol is net harmful for young men due to low CVD risk, and net beneficial for older men. Unclear for women, probably net positive unless you have family history of breast cancer&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;chapters-1011-vitamins-and-minerals&quot;&gt;Chapters 10–11: Vitamins and Minerals&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;Hard to tell how much calcium people need because outer shells of bones can absorb calcium quickly but without increasing inner bone density&lt;/li&gt;
  &lt;li&gt;Insufficient vitamins can contribute to CVD and cancer even without deficiency disease&lt;/li&gt;
  &lt;li&gt;Folic acid started being added to grains to prevent birth defects but it accidentally reduced CVD and cancer too&lt;/li&gt;
  &lt;li&gt;Vitamin A helps regulate cell division and thus prevent cancer&lt;/li&gt;
  &lt;li&gt;Beta carotene isn’t vitamin A but your body turns it into vitamin A&lt;/li&gt;
  &lt;li&gt;Excess pre-formed vitamin A blocks vitamin D. Better to get vitamin A from beta carotene than from pre-formed vitamin A (retinol)&lt;/li&gt;
  &lt;li&gt;Free radicals are positively charged so they can steal electrons from DNA or cholesterol or other important things in your body&lt;/li&gt;
  &lt;li&gt;Early studies showed benefits but recent large RCTs don’t find benefits to antioxidant supplements&lt;/li&gt;
  &lt;li&gt;James Watson suggests antioxidant supplements are bad because free radicals kill cancer cells&lt;/li&gt;
  &lt;li&gt;Antioxidants are good if you get them from food&lt;/li&gt;
  &lt;li&gt;Excess iron generates free radicals. Some evidence suggests excess iron causes heart disease and cancer, but jury is still out&lt;/li&gt;
  &lt;li&gt;Body does a good job of passing on unneeded iron when it comes from plants but not when it comes from meat (me: what about supplements?)&lt;/li&gt;
  &lt;li&gt;The book recommends taking a multivitamin that doesn’t contain iron&lt;/li&gt;
  &lt;li&gt;Sodium RDA is 2300mg but most people need less than 1000mg&lt;/li&gt;
  &lt;li&gt;American Heart Association recommends a max of 1500mg sodium&lt;/li&gt;
  &lt;li&gt;Selenium probably doesn’t matter&lt;/li&gt;
  &lt;li&gt;Take a multivitamin that contains the nutrients people tend to miss: beta carotene, B6, B12, folic acid, D, E, iron, zinc (me: what about earlier claim to not supplement iron?). No more than 2000IU pre-formed vitamin A&lt;/li&gt;
  &lt;li&gt;Basic One multivitamin is good. Menstruating women get the version with iron, others don’t need the iron
    &lt;ul&gt;
      &lt;li&gt;me: Compared to normal multivitamins, &lt;a href=&quot;https://coopercomplete.com/product/basic-one-multivitamin-iron-free/&quot;&gt;Basic One&lt;/a&gt; has considerably more D3, E, and B6, and &lt;em&gt;way&lt;/em&gt; more B12; added selenium, chromium, and lycopene (an antioxidant found in many red fruits/veggies); and its vitamin A comes exclusively in the form of beta carotene (no pre-formed vitamin A)&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;notes-from-red-pen-reviews&quot;&gt;Notes from Red Pen Reviews&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;This section was added on 2024-05-31.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://www.redpenreviews.org/&quot;&gt;Red Pen Reviews&lt;/a&gt;, a website that reviews the scientific accuracy of books on nutrition, &lt;a href=&quot;https://www.redpenreviews.org/reviews/eat-drink-and-be-healthy/&quot;&gt;reviewed Eat, Drink, and Be Healthy&lt;/a&gt; and gave it the highest score of any book it’s reviewed.&lt;sup id=&quot;fnref:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt; It did, however, have some minor quibbles with the book. It disputed one claim from the book, found that the book overstated the strength of evidence in two of its references, and sort-of disputed the book’s position on multivitamins.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Claim from the book:&lt;/strong&gt; “Protein sources from plants and lean meats such as chicken or fish are likely more beneficial than protein from red and processed meat. Protein from soy, however, is less well-understood.”&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;There’s mixed evidence on whether plants and lean meats are better for cardiovascular disease than red meat. It seems broadly true but there’s some conflicting evidence.&lt;/li&gt;
  &lt;li&gt;The skepticism about soy is based on old studies, and more recent studies find that soy is beneficial.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Reference from the book:&lt;/strong&gt; “In an analysis my colleagues and I did among more than 43,000 men, intake of total protein was minimally associated with heart disease risk, while intake of protein from meat was associated with higher risk.”&lt;/p&gt;

&lt;p&gt;Among all participants, the correlation between protein source and heart disease risk was non-significant. The correlation only became significant when the analysis was restricted to “healthy” patients.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reference from the book:&lt;/strong&gt; “Dark leafy green vegetables contain two pigments, lutein and zeaxanthin, that accumulate in the eye. These two, along with phytochemicals called carotenoids, can snuff out free radicals before they can harm the eye’s sensitive tissues.”&lt;/p&gt;

&lt;p&gt;The cited study established that lutein and zeaxanthin are good for eye health, but the study did not examine mechanisms. The proposed mechanism is plausible, but not supported by the cited study.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Claim from the book:&lt;/strong&gt; “Take a daily multivitamin.”&lt;/p&gt;

&lt;p&gt;Some, such as the president of the Australian Medical Association, say multivitamins are a waste. Some good studies showed multivitamins had no beneficial effects. But other good studies showed multivitamins did have beneficial effects. So the claim that you should take a multivitamin is controversial.&lt;/p&gt;

&lt;p&gt;My personal take: multivitamins are cheap and easy, so they clearly pass a cost-benefit analysis, even if there’s a good chance that they don’t work.&lt;/p&gt;

&lt;h1 id=&quot;footnotes&quot;&gt;Footnotes&lt;/h1&gt;


&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I wrote this down because I had recently learned about the seed oil theory of obesity. Now, after having looked into it further, I’m reasonably confident that the seed oil theory (1) is fringe among nutrition scientists, (2) contradicts known biological mechanisms, and (3) contradicts most empirical findings from RCTs and cohort studies on the relationships between food and health. &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:2&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I come from the resistance training world where the preferred method of losing weight is to just eat less. I thought maybe we were a bunch of weirdos but this shows that the “eat less” method is pretty common among the general population. &lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:3&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;That’s why I read the book in the first place. &lt;a href=&quot;#fnref:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>What's the Healthiest Body Composition?</title>
				<pubDate>Wed, 22 May 2024 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2024/05/22/healthiest_body_composition/</link>
				<guid isPermaLink="true">http://mdickens.me/2024/05/22/healthiest_body_composition/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;&lt;a href=&quot;https://mdickens.me/2024/05/05/healthiest_BMI/&quot;&gt;Last time&lt;/a&gt;, I found that the healthiest BMI range for all-cause mortality is 20–22. But BMI doesn’t tell the whole story. Most obviously, it doesn’t account for body fat vs. lean mass. All else equal, you’d rather have more muscle&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt; and less fat.&lt;/p&gt;

&lt;p&gt;So what’s the healthiest combination of lean mass + fat mass?&lt;/p&gt;

&lt;p&gt;I’m not going to answer that question because I can’t. Instead, I will explain why I can’t, and then give a rough guess at the answer.&lt;/p&gt;

&lt;p&gt;Scientists have been measuring and collecting data on BMI for decades. You can find plenty of giant BMI studies with three million participants in various countries.&lt;/p&gt;

&lt;p&gt;We have much sparser data on body fat. Scientists didn’t start collecting data on body fat until the last few decades. And body fat is harder to measure—we have various methods for estimating body fat, but they’re all more complicated than calculating BMI.&lt;/p&gt;

&lt;p&gt;I managed to scrounge together some studies on body fat and mortality. &lt;strong&gt;My best guess: the average woman should aim for a BMI of 21 with 20% body fat, and the average man a BMI of 21 with 10% body fat.&lt;/strong&gt; (Subject to individual variation due to genetics and whatnot.)&lt;/p&gt;

&lt;p&gt;Trans men should probably target the same body fat % as cis men, and likewise for trans women and cis women, because hormone therapy alters body fat distribution (&lt;a href=&quot;https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7061235/&quot;&gt;Spanos et al. (2020)&lt;/a&gt;&lt;sup id=&quot;fnref:24&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:24&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt;).&lt;/p&gt;

&lt;p&gt;The evidence weakly suggests that there is no lower bound on healthy fat mass, and no upper bound on healthy lean mass. We have so little mortality data on extremely lean + muscular people that we can’t say how healthy they are.&lt;/p&gt;

&lt;p&gt;A more in-depth analysis would look at a variety of health indicators (blood pressure, HDL cholesterol, etc.) and use that to predict mortality. I didn’t do that, I just looked at mortality data.&lt;/p&gt;

&lt;!-- more --&gt;

&lt;h2 id=&quot;contents&quot;&gt;Contents&lt;/h2&gt;

&lt;ul id=&quot;markdown-toc&quot;&gt;
  &lt;li&gt;&lt;a href=&quot;#contents&quot; id=&quot;markdown-toc-contents&quot;&gt;Contents&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#research-on-body-composition-and-mortality&quot; id=&quot;markdown-toc-research-on-body-composition-and-mortality&quot;&gt;Research on body composition and mortality&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#research-on-waist-circumference&quot; id=&quot;markdown-toc-research-on-waist-circumference&quot;&gt;Research on waist circumference&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#research-on-the-relationship-between-bmi-and-body-fat&quot; id=&quot;markdown-toc-research-on-the-relationship-between-bmi-and-body-fat&quot;&gt;Research on the relationship between BMI and body fat&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#good-research-that-nonetheless-didnt-answer-my-question&quot; id=&quot;markdown-toc-good-research-that-nonetheless-didnt-answer-my-question&quot;&gt;Good research that nonetheless didn’t answer my question&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#is-there-an-upper-bound-on-healthy-lean-mass&quot; id=&quot;markdown-toc-is-there-an-upper-bound-on-healthy-lean-mass&quot;&gt;Is there an upper bound on healthy lean mass?&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#summary-of-findings&quot; id=&quot;markdown-toc-summary-of-findings&quot;&gt;Summary of findings&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#notes&quot; id=&quot;markdown-toc-notes&quot;&gt;Notes&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I looked at all the relevant research papers with at least 100 citations on Google Scholar. Here’s what I found.&lt;/p&gt;

&lt;h2 id=&quot;research-on-body-composition-and-mortality&quot;&gt;Research on body composition and mortality&lt;/h2&gt;

&lt;p&gt;The best data comes from the big Danish follow-up study “Diet, Cancer and Health” which followed 50,000 Danish adults aged 50 to 64 from 1993 to 1997. Two different research papers analyzed the data from this study: &lt;a href=&quot;https://onlinelibrary.wiley.com/doi/full/10.1038/oby.2004.131&quot;&gt;Bigaard et al. (2004)&lt;/a&gt;&lt;sup id=&quot;fnref:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt; and &lt;a href=&quot;https://www.nature.com/articles/0802976&quot;&gt;Bigaard et al. (2005)&lt;/a&gt;&lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt;. These papers break down BMI into body fat mass index (BFMI) and fat-free mass index (FFMI) (measured using &lt;a href=&quot;https://en.wikipedia.org/wiki/Bioelectrical_impedance_analysis&quot;&gt;bioelectrical impedance&lt;/a&gt;). Just like how BMI equals weight/height&lt;sup&gt;2&lt;/sup&gt;, BFMI equals bodyfat/height&lt;sup&gt;2&lt;/sup&gt;, and FFMI equals fat-free mass (a.k.a. lean mass) / height&lt;sup&gt;2&lt;/sup&gt; (such that BFMI + FFMI = BMI).&lt;/p&gt;

&lt;p&gt;Bigaard et al. (2004) found that mortality monotonically increases with BFMI above 5–6, and gets slightly worse below 5–6. For women, mortality monotonically decreases with FFMI; for men it monotonically decreases up to 19, at which point it starts increasing. See &lt;a href=&quot;https://onlinelibrary.wiley.com/cms/asset/81cd76fe-6201-4eee-b9fa-e4c5d2807cd4/oby_1042_f1.gif&quot;&gt;Figure 1&lt;/a&gt; (dark line represents men, light line represents women).&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;https://onlinelibrary.wiley.com/cms/asset/81cd76fe-6201-4eee-b9fa-e4c5d2807cd4/oby_1042_f1.gif&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;(BFMI and FFMI were controlled for each other, which fixes the problem that people with more lean mass usually have more fat mass.)&lt;/p&gt;

&lt;p&gt;If we combine the ideal BFMI and FFMI from the Danish study, we get:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;pre&gt;&lt;code&gt;men:   5 BFMI + 19 FFMI = 24 BMI with 21% body fat
women: 7 BFMI + 17 FFMI = 24 BMI with 29% body fat
&lt;/code&gt;&lt;/pre&gt;
&lt;/blockquote&gt;
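&lt;p&gt;The body fat percentages above fall straight out of the definitions: since BFMI + FFMI = BMI, the implied body fat fraction is just BFMI / BMI. A minimal sketch of the arithmetic (mine, not the paper’s):&lt;/p&gt;

```python
# Since BFMI + FFMI = BMI, the implied body fat fraction is BFMI / BMI.
def body_fat_pct(bfmi, ffmi):
    """Body fat percentage implied by a fat mass index and a fat-free mass index."""
    return 100 * bfmi / (bfmi + ffmi)

print(round(body_fat_pct(5, 19)))  # men:   5 BFMI + 19 FFMI -> 21% body fat
print(round(body_fat_pct(7, 17)))  # women: 7 BFMI + 17 FFMI -> 29% body fat
```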

&lt;p&gt;But this probably isn’t right. The &lt;a href=&quot;https://mdickens.me/2024/05/05/healthiest_BMI/#the-big-bmi-studies&quot;&gt;best meta-analyses&lt;/a&gt; on BMI and mortality control for three big confounders: smoking, health conditions, and study follow-up length. All these confounders make low BMIs look unhealthier than they really are. The Bigaard et al. papers controlled for smoking, but not for health conditions (although participants were healthier than average due to selection effects&lt;sup id=&quot;fnref:5&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:5&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;5&lt;/a&gt;&lt;/sup&gt;); plus the Danish study only lasted 5 years, and 5-year studies show a bias toward high BMIs.&lt;/p&gt;

&lt;p&gt;In the BMI studies, controlling for these additional confounders reduces the apparent healthiest BMI by 3 or 4 points. And the Danish study found that mortality was minimized at 25 BMI,&lt;sup id=&quot;fnref:4&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:4&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;6&lt;/a&gt;&lt;/sup&gt; whereas in reality, &lt;a href=&quot;https://mdickens.me/2024/05/05/healthiest_BMI/&quot;&gt;it’s probably minimized at 20–22 BMI&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Undiagnosed health conditions probably decrease both lean mass and fat mass. That means low fat mass and low lean mass are both healthier than observational studies make them look.&lt;/p&gt;

&lt;p&gt;If we naively subtract 4 points from the body-fat mass index,&lt;sup id=&quot;fnref:36&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:36&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;7&lt;/a&gt;&lt;/sup&gt; we get a new estimate of the healthiest body composition for men and women:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;pre&gt;&lt;code&gt;men:   1 BFMI + 19 FFMI = 20 BMI with 5% body fat
women: 3 BFMI + 17 FFMI = 20 BMI with 15% body fat
&lt;/code&gt;&lt;/pre&gt;
&lt;/blockquote&gt;

&lt;p&gt;These numbers are extraordinarily low. But remember that I came up with these numbers by extrapolating what might happen if we controlled for confounders, so the numbers don’t come from actual data and could be pretty far off. My guess is they’re too low—maybe selection bias in the Diet, Cancer and Health Study had a similar effect to controlling for health, and I’ve over-corrected.&lt;/p&gt;

&lt;p&gt;I don’t know about the exact numbers, but I expect that low body fat is good.&lt;/p&gt;

&lt;p&gt;The data showed a trend of increasing mortality above 19 FFMI for men and 17 FFMI for women, but the trend wasn’t strong, so it’s possible that higher FFMIs are healthier (more on this &lt;a href=&quot;#is-there-an-upper-bound-on-healthy-lean-mass&quot;&gt;later&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;Other research has shown that ideal BMI increases with age, and the Danish study only included adults aged 50 to 64, so younger people should probably target lower body fat, and older people should aim higher. (It’s possible that younger people actually want to have less lean mass, not less body fat, but that doesn’t sound likely to me.)&lt;/p&gt;

&lt;p&gt;Ideal BMI varies by ethnicity. But that’s probably because different ethnicities distribute body fat differently—e.g., South Asians carry more body fat at a given BMI, which makes BMIs on the higher end look unhealthier for South Asians. I don’t know of any cross-ethnic studies on body fat and mortality, but my best guess is that ideal body fat doesn’t vary much by ethnicity.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://www.bmj.com/content/362/bmj.k2575.full&quot;&gt;Lee et al. (2018)&lt;/a&gt;&lt;sup id=&quot;fnref:9&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:9&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;8&lt;/a&gt;&lt;/sup&gt;, the second-most useful paper, looked at (predominantly white) males from the long-term Health Professionals Follow-Up Study and found that, controlling for confounders, mortality for men was minimized in the first quintile of fat mass and the third quintile of lean body mass (see &lt;a href=&quot;https://www.bmj.com/highwire/markup/980335/expansion?width=1000&amp;amp;height=500&amp;amp;iframe=true&amp;amp;postprocessors=highwire_figures%2Chighwire_math&quot;&gt;Table 2&lt;/a&gt;). That is, men should aim for low fat mass and average lean mass.&lt;/p&gt;

&lt;p&gt;(Lean mass and mortality had only a weak association except in the first quintile: low lean mass increased risk, but lean mass didn’t clearly affect mortality beyond that.)&lt;/p&gt;

&lt;p&gt;Lee et al. (2018) found that male mortality was minimized at 56 kg of lean mass and 5–21 kg of fat mass.&lt;sup id=&quot;fnref:37&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:37&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;9&lt;/a&gt;&lt;/sup&gt; The median of the 1st quintile had a fat mass of 15 kg, which corresponds to 21% body fat.&lt;sup id=&quot;fnref:15&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:15&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;10&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;Lee et al. found that mortality was minimized at a BMI of 24 or so, which suggests that the analysis didn’t fully adjust for confounders. Adjusting the fat mass index down by 3 points to produce a BMI of 21 (assuming a height of 176 cm, which corresponds to a BMI of 23&lt;sup id=&quot;fnref:38&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:38&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;11&lt;/a&gt;&lt;/sup&gt; for someone with 56 kg lean mass + 15 kg fat mass) predicts an ideal male body composition of&lt;/p&gt;

&lt;blockquote&gt;
  &lt;pre&gt;&lt;code&gt;2 BFMI + 18 FFMI = 20 BMI with 9% body fat
&lt;/code&gt;&lt;/pre&gt;
&lt;/blockquote&gt;
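&lt;p&gt;As a check on the arithmetic (my own back-of-the-envelope, not from Lee et al.), here is the full adjustment for a 176 cm man with 56 kg of lean mass and 15 kg of fat mass:&lt;/p&gt;

```python
# Back-of-the-envelope: each index is mass (kg) divided by height (m) squared.
height_m = 1.76
lean_kg, fat_kg = 56, 15

ffmi = lean_kg / height_m**2   # fat-free mass index, ~18
fmi = fat_kg / height_m**2     # fat mass index, ~5
print(round(ffmi + fmi))                          # BMI ~ 23
print(round(100 * fat_kg / (lean_kg + fat_kg)))   # body fat ~ 21%

# Naively knock 3 points off the fat mass index to adjust for confounding:
fmi_adj = fmi - 3
print(round(ffmi + fmi_adj))                      # adjusted BMI ~ 20
print(round(100 * fmi_adj / (ffmi + fmi_adj)))    # body fat ~ 9%
```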

&lt;p&gt;Interestingly, Lee et al. found that if you exclude men with excessively low lean mass, then the mortality-minimizing BMI range is 18.5–20.4, not 20–22 as I &lt;a href=&quot;https://mdickens.me/2024/05/05/healthiest_BMI/&quot;&gt;previously reported&lt;/a&gt;. (The meta-analyses I looked at in my last post didn’t include data on body fat.) And the effect was surprisingly strong—men in the 18.5–20.4 group had a 15% lower mortality rate than the 20.5–22.4 group (see &lt;a href=&quot;https://www.bmj.com/highwire/markup/980333/expansion?width=1000&amp;amp;height=500&amp;amp;iframe=true&amp;amp;postprocessors=highwire_figures%2Chighwire_math&quot;&gt;Table 3&lt;/a&gt;). This suggests two things:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Having a BMI on the low end (18.5–20.4) has large health benefits and large downsides relative to a mid-range BMI (20.5–22.4).&lt;/li&gt;
  &lt;li&gt;The downsides pretty much exclusively come from low lean mass, not low fat mass. If you have low-end BMI with sufficiently high lean mass, you get all the upside and ~none of the downside.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;(Note: After excluding people with low lean mass, there were zero men left with BMIs below 18.5, out of a sample of 38,000. You might shoot for a BMI even lower than 18.5, but it looks pretty much impossible.&lt;sup id=&quot;fnref:16&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:16&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;12&lt;/a&gt;&lt;/sup&gt;)&lt;/p&gt;

&lt;h2 id=&quot;research-on-waist-circumference&quot;&gt;Research on waist circumference&lt;/h2&gt;

&lt;p&gt;A few studies looked at mortality and waist circumference. Waist circumference is often used as a proxy for body fat, and arguably it’s a &lt;em&gt;better&lt;/em&gt; metric than body fat % because visceral fat (abdominal fat that’s distributed around the organs) carries greater health risks than subcutaneous fat (distributed under the skin).&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://www.ahajournals.org/doi/full/10.1161/CIRCULATIONAHA.107.739714&quot;&gt;Zhang et al. (2008)&lt;/a&gt;&lt;sup id=&quot;fnref:14&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:14&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;13&lt;/a&gt;&lt;/sup&gt; looked at American women in the Nurses’ Health Study. Participants had the lowest mortality in the 1st quintile of waist circumference—less than 28 inches or 71 cm, see Table 3. The study also looked at waist:hip ratio and found that the 1st quintile (&amp;lt;0.73) minimized mortality risk.&lt;/p&gt;

&lt;p&gt;Zhang et al. reported a &lt;a href=&quot;https://en.wikipedia.org/wiki/Hazard_ratio&quot;&gt;hazard ratio&lt;/a&gt; of 1.01 for the 28–29 inch waist group for all participants—that is, 28–29 inches was only slightly less healthy than &amp;lt;28 inches. But among never-smokers, the 28–29 inch group had a hazard ratio of 1.31. Reading between the lines, that implies that the healthiest waist circumference for non-smoking women is considerably less than 28 inches (71 cm).&lt;sup id=&quot;fnref:39&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:39&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;14&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://www.sciencedirect.com/science/article/pii/S1279770723021930&quot;&gt;Hu et al. (2018)&lt;/a&gt;&lt;sup id=&quot;fnref:11&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:11&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;15&lt;/a&gt;&lt;/sup&gt; found that among middle-aged and elderly Chinese individuals, mortality was minimized at a waist circumference of 83–88 cm for men and 79–83 cm for women. This study used a relatively short follow-up (8.5 years), which means the true healthiest waist circumference is probably lower.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://academic.oup.com/aje/article/152/3/264/73227&quot;&gt;Baik et al. (2000)&lt;/a&gt;&lt;sup id=&quot;fnref:13&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:13&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;16&lt;/a&gt;&lt;/sup&gt; found that American men had the lowest mortality rate for waist circumferences in the 3rd quintile (36.3–37.9 inches or 92.2–96.3 cm), and waist:hip ratios in the 2nd quintile (0.90–0.91), see &lt;a href=&quot;https://academic.oup.com/view-large/555349&quot;&gt;Table 4&lt;/a&gt;. The authors write that the higher mortality for lean men is probably due to confounding—respiratory disease causes men to lose weight and die sooner.&lt;/p&gt;

&lt;h2 id=&quot;research-on-the-relationship-between-bmi-and-body-fat&quot;&gt;Research on the relationship between BMI and body fat&lt;/h2&gt;

&lt;p&gt;Some studies provide formulas to convert BMI to body fat. We can use those formulas to estimate ideal body fat from ideal BMI. This method doesn’t really work because the healthiest BMI substantially changes if you know someone’s fat mass / lean mass. But I’m going to use the method anyway and see what happens.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;Data from the US National Health and Nutrition Examination Survey in &lt;a href=&quot;https://www.sciencedirect.com/science/article/pii/S0002916523239659&quot;&gt;Flegal et al. (2009)&lt;/a&gt;&lt;sup id=&quot;fnref:23&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:23&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;17&lt;/a&gt;&lt;/sup&gt; suggest a correspondence between ideal BMI and body fat % (see &lt;a href=&quot;https://www.sciencedirect.com/science/article/pii/S0002916523239659#t3&quot;&gt;Table 3&lt;/a&gt; and &lt;a href=&quot;https://www.sciencedirect.com/science/article/pii/S0002916523239659#t4&quot;&gt;Table 4&lt;/a&gt;):&lt;/p&gt;

    &lt;table&gt;
      &lt;thead&gt;
        &lt;tr&gt;
          &lt;th&gt;demographic&lt;/th&gt;
          &lt;th&gt;body fat&lt;/th&gt;
        &lt;/tr&gt;
      &lt;/thead&gt;
      &lt;tbody&gt;
        &lt;tr&gt;
          &lt;td&gt;women under 40&lt;/td&gt;
          &lt;td&gt;25–30%&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
          &lt;td&gt;women over 60&lt;/td&gt;
          &lt;td&gt;30–35%&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
          &lt;td&gt;men under 40&lt;/td&gt;
          &lt;td&gt;15–20%&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
          &lt;td&gt;men over 60&lt;/td&gt;
          &lt;td&gt;20–25%&lt;/td&gt;
        &lt;/tr&gt;
      &lt;/tbody&gt;
    &lt;/table&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;a href=&quot;https://www.sciencedirect.com/science/article/abs/pii/S026156141000004X&quot;&gt;Meeuwsen et al. (2010)&lt;/a&gt;&lt;sup id=&quot;fnref:19&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:19&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;18&lt;/a&gt;&lt;/sup&gt; measured body fat in UK adults and came up with a linear formula to predict body fat percentage:&lt;/p&gt;

    &lt;blockquote&gt;
      &lt;pre&gt;&lt;code&gt;women: BF% =  -1.63 + 1.129 * BMI + 0.140 * age
men:   BF% = -13.51 + 1.129 * BMI + 0.140 * age
&lt;/code&gt;&lt;/pre&gt;
    &lt;/blockquote&gt;

    &lt;p&gt;These formulas predict that, assuming you want a BMI of 21 at age 30, the healthiest body fat is 26% for women and 14% for men. At age 60, that rises to 30% for women and 19% for men.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3183503/&quot;&gt;Mills et al. (2007)&lt;/a&gt;&lt;sup id=&quot;fnref:18&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:18&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;19&lt;/a&gt;&lt;/sup&gt; found a BMI/body-fat correlation that suggests an ideal male body fat of 4% to 9%. But it also says a white male with a BMI of 18 has –1% body fat, so I suspect it’s pretty inaccurate on the low end.&lt;sup id=&quot;fnref:40&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:40&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;20&lt;/a&gt;&lt;/sup&gt;&lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;a href=&quot;https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3766672/&quot;&gt;Ranasinghe et al. (2013)&lt;/a&gt;&lt;sup id=&quot;fnref:21&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:21&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;21&lt;/a&gt;&lt;/sup&gt; derived formulas to estimate body fat percentage in Sri Lankan adults:&lt;/p&gt;

    &lt;blockquote&gt;
      &lt;pre&gt;&lt;code&gt;women: BF% =  3.819 + 0.918 * BMI + 0.153 * age
men:   BF% = -9.662 + 1.114 * BMI + 0.139 * age
&lt;/code&gt;&lt;/pre&gt;
    &lt;/blockquote&gt;

    &lt;p&gt;These formulas predict an ideal body fat of 28% for women and 18% for men at age 30, again assuming a BMI of 21.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;
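The two regression formulas above are easy to sanity-check by evaluating them directly. Here is a minimal sketch (the function names are mine; the coefficients are as quoted from Meeuwsen et al. 2010 and Ranasinghe et al. 2013):

```python
def bf_meeuwsen(bmi, age, sex):
    """Predicted body fat %, UK adults (Meeuwsen et al. 2010)."""
    intercept = -1.63 if sex == "female" else -13.51
    return intercept + 1.129 * bmi + 0.140 * age

def bf_ranasinghe(bmi, age, sex):
    """Predicted body fat %, Sri Lankan adults (Ranasinghe et al. 2013)."""
    if sex == "female":
        return 3.819 + 0.918 * bmi + 0.153 * age
    return -9.662 + 1.114 * bmi + 0.139 * age

# At a BMI of 21 and age 30, the UK formula gives roughly 26% (women)
# and 14% (men); the Sri Lankan formula gives roughly 28% and 18%.
for f in (bf_meeuwsen, bf_ranasinghe):
    for sex in ("female", "male"):
        print(f.__name__, sex, round(f(21, 30, sex)))
```

Evaluating at a BMI of 21 reproduces the percentages quoted in the text.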

&lt;h2 id=&quot;good-research-that-nonetheless-didnt-answer-my-question&quot;&gt;Good research that nonetheless didn’t answer my question&lt;/h2&gt;

&lt;p&gt;Some other studies looked at the association between body fat/waist circumference and mortality, but only focused on the effects of overweightness/obesity, which doesn’t tell us the healthiest body composition.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://www.nature.com/articles/0802976&quot;&gt;Bigaard et al. (2005)&lt;/a&gt;&lt;sup id=&quot;fnref:2:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt;, using the Danish data set discussed previously, found that waist circumference predicted mortality even when controlling for BMI. It also found that, if you control for waist circumference, mortality monotonically decreases with both body fat mass index and fat-free mass index. (I wish I knew what to do with this information—I don’t know how to increase body fat without also increasing waist circumference.&lt;sup id=&quot;fnref:41&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:41&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;22&lt;/a&gt;&lt;/sup&gt;)&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://www.jacc.org/doi/abs/10.1016/j.jacc.2013.06.027&quot;&gt;Britton et al. (2013)&lt;/a&gt;&lt;sup id=&quot;fnref:7&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:7&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;23&lt;/a&gt;&lt;/sup&gt; found that excess body fat was associated with elevated mortality risk, but did not provide granularity on the low end of body fat.&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://www.mayoclinicproceedings.org/article/S0025-6196(13)01040-9/abstract&quot;&gt;Cerhan et al. (2013)&lt;/a&gt;&lt;sup id=&quot;fnref:12&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:12&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;24&lt;/a&gt;&lt;/sup&gt;, a pooled analysis of 11 cohort studies with a total of 650,000 white participants, found that mortality monotonically increased with waist circumference 90+ cm for men and 70+ cm for women. But it did not look at waist circumferences smaller than 90 cm for men or 70 cm for women.&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://onlinelibrary.wiley.com/doi/full/10.1002/oby.22423&quot;&gt;Chen et al. (2019)&lt;/a&gt;&lt;sup id=&quot;fnref:10&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:10&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;25&lt;/a&gt;&lt;/sup&gt; found greater mortality for waist circumferences of 90+ cm for Chinese men and 80+ cm for Chinese women, but did not provide granularity beyond that.&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://jamanetwork.com/journals/jamainternalmedicine/article-abstract/775594&quot;&gt;Jacobs et al. (2010)&lt;/a&gt;&lt;sup id=&quot;fnref:6&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:6&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;26&lt;/a&gt;&lt;/sup&gt; found that among 100,000 (mostly white) Americans over a 10-year period, lower waist circumference was associated with reduced mortality, but it did not present data on waist circumferences smaller than 90 cm for men or 75 cm for women.&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://dom-pubs.onlinelibrary.wiley.com/doi/abs/10.1111/dom.13050&quot;&gt;Lee et al. (2018)&lt;/a&gt;&lt;sup id=&quot;fnref:8&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:8&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;27&lt;/a&gt;&lt;/sup&gt; found that body fat predicted mortality more strongly than BMI, and visceral fat more strongly still. It found that people in the bottom third by fat mass had the lowest mortality, as did the people in the bottom third by visceral-fat-to-subcutaneous-fat ratio.&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://www.nature.com/articles/0801787&quot;&gt;Visscher et al. (2001)&lt;/a&gt;&lt;sup id=&quot;fnref:17&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:17&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;28&lt;/a&gt;&lt;/sup&gt; found that male waist circumferences over 94 cm were associated with increased mortality, and failed to find a trend among women.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;is-there-an-upper-bound-on-healthy-lean-mass&quot;&gt;Is there an upper bound on healthy lean mass?&lt;/h2&gt;

&lt;p&gt;We know high BMIs are unhealthy, and we know that the main harms to health come from excess body fat. Does excess lean mass increase mortality risk? Or is it better to be as big and lean as possible?&lt;/p&gt;

&lt;p&gt;The Danish Diet, Cancer and Health Study found that, after controlling for body fat, mortality risk decreases with lean mass up to a fat-free mass index (FFMI) of 17 for women and 19 for men, and starts increasing again above that point. The increasing trend above 19/17 is not statistically significant: after adjusting for both body-fat mass index (BFMI) and smoking, the upper portion of the FFMI curve has a slope of 1.03 with a 95% confidence interval of [0.93, 1.13] (see Table 4 from &lt;a href=&quot;https://onlinelibrary.wiley.com/doi/full/10.1038/oby.2004.131&quot;&gt;Bigaard et al. (2004)&lt;/a&gt;&lt;sup id=&quot;fnref:3:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt;).&lt;/p&gt;

&lt;p&gt;This suggests that there’s no harm to having a FFMI above 17 for women or 19 for men, but there’s no benefit, either.&lt;/p&gt;
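For concreteness: FFMI and BFMI are the standard decomposition of BMI into its fat-free and fat components (kilograms of lean or fat mass divided by height in meters squared), so the two always sum to BMI. A small illustrative sketch, with helper names of my own:

```python
def bmi(weight_kg, height_m):
    return weight_kg / height_m ** 2

def ffmi_bfmi(weight_kg, height_m, body_fat_frac):
    """Split BMI into fat-free mass index and body-fat mass index.

    FFMI = lean mass / height^2; BFMI = fat mass / height^2.
    By construction, FFMI + BFMI equals BMI.
    """
    fat_mass = weight_kg * body_fat_frac
    lean_mass = weight_kg - fat_mass
    return lean_mass / height_m ** 2, fat_mass / height_m ** 2

# e.g. a 70 kg, 1.75 m man at 15% body fat:
# BMI ~22.9, FFMI ~19.4 (around the apparent male optimum), BFMI ~3.4
```

This makes it clear why the Danish study's FFMI thresholds of 19 (men) and 17 (women) correspond to fairly ordinary builds rather than bodybuilder physiques.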

&lt;p&gt;Alternatively, we can look at the relationship between resistance training and mortality because resistance training is closely associated with lean mass. A 2022 meta-analysis by &lt;a href=&quot;https://bjsm.bmj.com/content/56/13/755&quot;&gt;Momma et al.&lt;/a&gt;&lt;sup id=&quot;fnref:25&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:25&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;29&lt;/a&gt;&lt;/sup&gt; found that muscle-strengthening activity was associated with reduced mortality risk up to about 60 minutes per week; beyond that point, marginal resistance training was associated with increasing mortality, and 140 minutes per week showed the same mortality risk as 0 minutes. The meta-analysis did not control for BMI, which I suspected could explain the increase in mortality, but one large cohort study (&lt;a href=&quot;https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7417019/&quot;&gt;Patel et al. (2020)&lt;/a&gt;&lt;sup id=&quot;fnref:27&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:27&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;30&lt;/a&gt;&lt;/sup&gt;) did control for BMI (among other factors) and also found that resistance training was associated with increased mortality above 60 minutes or so.&lt;/p&gt;

&lt;p&gt;I’ve seen some people dismiss this finding on the basis that resistance training is well-known to improve many health indicators like blood pressure and bone density. That’s true, but from what I’ve seen, that research primarily looks at low-dose resistance training, so it can’t tell us whether increasing training volume keeps improving health at high doses.&lt;/p&gt;

&lt;p&gt;Why might muscle-strengthening activity increase mortality risk? I have seen two proposed mechanisms:&lt;/p&gt;

&lt;p&gt;First, &lt;a href=&quot;https://bjsm.bmj.com/content/47/6/393.full&quot;&gt;Miyachi (2013)&lt;/a&gt;&lt;sup id=&quot;fnref:28&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:28&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;31&lt;/a&gt;&lt;/sup&gt; found that resistance training increases arterial stiffness, which &lt;a href=&quot;https://en.wikipedia.org/wiki/Arterial_stiffness&quot;&gt;increases heart disease risk&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Second, &lt;a href=&quot;https://www.strongerbyscience.com/research-spotlight-lifting-longevity/&quot;&gt;Nuckols (2022)&lt;/a&gt;&lt;sup id=&quot;fnref:29&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:29&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;32&lt;/a&gt;&lt;/sup&gt; writes:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;The studies in [Momma et al. (2022)] mostly used older subjects. It’s entirely possible that the optimal dose of resistance training for older adults is a lot lower than the optimal dose of resistance training for younger adults. For example, oxidative stress and generalized inflammation likely contribute to biological aging, and older adults have higher levels of oxidative stress and generalized inflammation. Resistance training causes oxidative stress and inflammation in a dose-dependent manner, but this is generally a good thing – those stressors are triggers for training-induced adaptations, and they also trigger your body to ramp up endogenous antioxidant production so that you can better handle future stressors (resulting in net reductions in inflammation and oxidative stress at rest). However, excessive training doses can induce too much oxidative stress and inflammation, setting the stage for a variety of deleterious outcomes (which we tend to collectively refer to as “overtraining”. It’s entirely possible – likely, even – that the threshold between productive training-induced stress and unproductive training-induced stress is considerably lower in older adults than younger adults.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2 id=&quot;summary-of-findings&quot;&gt;Summary of findings&lt;/h2&gt;

&lt;ol&gt;
  &lt;li&gt;None of the studies fully controlled for confounders, so we have to make a judgment call as to what the results would look like if we &lt;em&gt;did&lt;/em&gt; fully control for them.&lt;/li&gt;
  &lt;li&gt;Two big studies—the Danish Diet, Cancer and Health Study and the American Health Professionals Follow-Up Study—provide data on the association between mortality and lean mass / fat mass. If we extrapolate to what results we might see if we controlled for all confounders, these studies predict that men minimize mortality risk at 5–10% body fat and women minimize mortality risk at 15–20% body fat, as long as they have adequate lean mass.&lt;/li&gt;
  &lt;li&gt;Some research suggests that people in the lowest quintile of waist circumference have the lowest mortality risk, but we don’t know exactly what waist circumference minimizes mortality risk.&lt;/li&gt;
  &lt;li&gt;A great deal of research shows that on the higher end, higher body fat / bigger waist circumference is associated with increased mortality risk.&lt;/li&gt;
  &lt;li&gt;Higher lean body mass decreases mortality risk up to a fat-free mass index of 19 for men and 17 for women. Excess lean body mass above that point might increase mortality risk, but it’s not clear.&lt;/li&gt;
&lt;/ol&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Technically we want to look at lean mass, not muscle mass. Lean mass refers to &lt;em&gt;any&lt;/em&gt; body mass that isn’t fat, which includes muscles, bones, organs, etc. But the main way to gain lean mass is by building muscle. Resistance training does also increase bone mass, but much more slowly than muscle mass. I don’t know of any way to increase your organ mass, and even if you could, I don’t know that you would want to. &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:24&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Spanos, C., Bretherton, I., Zajac, J. D., &amp;amp; Cheung, A. S. (2020). &lt;a href=&quot;https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7061235/&quot;&gt;Effects of gender-affirming hormone therapy on insulin resistance and body composition in transgender individuals: a systematic review.&lt;/a&gt; See &lt;a href=&quot;https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7061235/table/T1/?report=objectonly&quot;&gt;Table 1&lt;/a&gt;. &lt;a href=&quot;#fnref:24&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:3&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Bigaard, J., Frederiksen, K., Tjønneland, A., Thomsen, B. L., Overvad, K., Heitmann, B. L., &amp;amp; Sørensen, T. I. (2004). &lt;a href=&quot;https://onlinelibrary.wiley.com/doi/full/10.1038/oby.2004.131&quot;&gt;Body fat and fat-free mass and all-cause mortality.&lt;/a&gt; &lt;a href=&quot;#fnref:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:3:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:2&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Bigaard, J., Frederiksen, K., Tjønneland, A., Thomsen, B. L., Overvad, K., Heitmann, B. L., &amp;amp; Sørensen, T. I. A. (2005). &lt;a href=&quot;https://www.nature.com/articles/0802976&quot;&gt;Waist circumference and body composition in relation to all-cause mortality in middle-aged men and women.&lt;/a&gt; &lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:2:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:5&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Study participants died at only about half the rate of the general population, see Bigaard et al. (2005)&lt;sup id=&quot;fnref:3:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt;. &lt;a href=&quot;#fnref:5&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:4&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Looking purely at BMI, the study found that mortality was minimized at 25. Looking at the combined healthiest BFMI and FFMI, mortality was minimized at 24 for men and 23 for women. I believe the discrepancy comes from the fact that people with low body fat often have insufficient lean mass. &lt;a href=&quot;#fnref:4&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:36&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Instead of subtracting 4 points from FFMI, I could subtract 2 points from BFMI and 2 points from FFMI. That seems worse to me because I would expect that few people with a FFMI of 19/17 have health conditions that reduce their lean mass—if they did, their FFMI would be considerably lower. So the observation that an FFMI of 19/17 minimizes mortality shouldn’t be confounded by health conditions. &lt;a href=&quot;#fnref:36&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:9&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Lee, D. H., Keum, N., Hu, F. B., Orav, E. J., Rimm, E. B., Willett, W. C., &amp;amp; Giovannucci, E. L. (2018). &lt;a href=&quot;https://www.bmj.com/content/362/bmj.k2575.full&quot;&gt;Predicted lean body mass, fat mass, and all cause and cause specific mortality in men: prospective US cohort study.&lt;/a&gt; &lt;a href=&quot;#fnref:9&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:37&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;According to their “model 2”, which controls for various factors but does not control for health conditions. &lt;a href=&quot;#fnref:37&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:15&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Figure 1 shows the lowest mortality at a fat mass of 21 kg, which corresponds to a surprisingly high 27% body fat. I believe this is an artifact of the model used to generate Figure 1. Figure 1 fits the data to a cubic spline model which smooths out the bumpiness in the quintiles. The 1st quintile had the lowest mortality rate but its 95% CI overlapped with the 2nd and 3rd quintiles, so the cubic spline model smoothed these out and ended up predicting that mortality is minimized in the 3rd quintile. Thanks to co-author Edward Giovannucci for this explanation. &lt;a href=&quot;#fnref:15&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:38&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I use 23 instead of 24 because my analysis of Bigaard et al. (2004) found that splitting out lean mass and fat mass reduced apparent healthiest BMI by 1 point. &lt;a href=&quot;#fnref:38&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:16&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;For me personally, if I somehow dropped my body fat to 0% without losing any lean mass, I’d still have a BMI of 20. &lt;a href=&quot;#fnref:16&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:14&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Zhang, C., Rexrode, K. M., Van Dam, R. M., Li, T. Y., &amp;amp; Hu, F. B. (2008). &lt;a href=&quot;https://www.ahajournals.org/doi/full/10.1161/CIRCULATIONAHA.107.739714&quot;&gt;Abdominal obesity and the risk of all-cause, cardiovascular, and cancer mortality: sixteen years of follow-up in US women.&lt;/a&gt; &lt;a href=&quot;#fnref:14&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:39&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;My reasoning: If the 28–29 group looks only slightly less healthy than the &amp;lt;28 group for all participants (including smokers), and if the mortality curve follows a U shape, then the nadir of the curve should be just below 28 inches. If the 28–29 group looks substantially less healthy, then the nadir must be considerably lower than 28 inches, because that way most of the function’s mass near the nadir occurs below 28 inches. &lt;a href=&quot;#fnref:39&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:11&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Hu, H., Wang, J., Han, X., Li, Y., Wang, F., Yuan, J., Miao, X., Yang, H., &amp;amp; He, M. (2018). &lt;a href=&quot;https://www.sciencedirect.com/science/article/pii/S1279770723021930&quot;&gt;BMI, waist circumference and all-cause mortality in a middle-aged and elderly Chinese population.&lt;/a&gt; &lt;a href=&quot;#fnref:11&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:13&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Baik, I., Ascherio, A., Rimm, E. B., Giovannucci, E., Spiegelman, D., Stampfer, M. J., &amp;amp; Willett, W. C. (2000). &lt;a href=&quot;https://academic.oup.com/aje/article/152/3/264/73227&quot;&gt;Adiposity and mortality in men.&lt;/a&gt; &lt;a href=&quot;#fnref:13&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:23&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Flegal, K. M., Shepherd, J. A., Looker, A. C., Graubard, B. I., Borrud, L. G., Ogden, C. L., &amp;amp; Schenker, N. (2009). &lt;a href=&quot;https://www.sciencedirect.com/science/article/pii/S0002916523239659&quot;&gt;Comparisons of percentage body fat, body mass index, waist circumference, and waist-stature ratio in adults.&lt;/a&gt; &lt;a href=&quot;#fnref:23&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:19&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Meeuwsen, S., Horgan, G. W., &amp;amp; Elia, M. (2010). &lt;a href=&quot;https://www.sciencedirect.com/science/article/abs/pii/S026156141000004X&quot;&gt;The relationship between BMI and percent body fat, measured by bioelectrical impedance, in a large adult sample is curvilinear and influenced by age and sex.&lt;/a&gt; &lt;a href=&quot;#fnref:19&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:18&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Mills, T. C., Gallagher, D., Wang, J., &amp;amp; Heshka, S. (2007). &lt;a href=&quot;https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3183503/&quot;&gt;Modelling the relationship between body fat and the BMI.&lt;/a&gt; &lt;a href=&quot;#fnref:18&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:40&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Then again, I would need to have –9% body fat to get my BMI down to 18 without losing lean mass, so maybe they’re on to something. &lt;a href=&quot;#fnref:40&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:21&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Ranasinghe, C., Gamage, P., Katulanda, P., Andraweera, N., Thilakarathne, S., &amp;amp; Tharanga, P. (2013). &lt;a href=&quot;https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3766672/&quot;&gt;Relationship between body mass index (BMI) and body fat percentage, estimated by bioelectrical impedance, in a group of Sri Lankan adults: a cross sectional study.&lt;/a&gt; &lt;a href=&quot;#fnref:21&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:41&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I’ve seen claims that low-intensity exercise disproportionately burns visceral fat (&lt;a href=&quot;https://doi.org/10.14814/phy2.15853&quot;&gt;Brobakken et al. (2023)&lt;/a&gt;&lt;sup id=&quot;fnref:42&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:42&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;33&lt;/a&gt;&lt;/sup&gt;), and cortisol (the “stress hormone”) disproportionately &lt;em&gt;adds&lt;/em&gt; visceral fat. But I think even if you exercise a lot and minimize stress, you still don’t want to have too much body fat. &lt;a href=&quot;#fnref:41&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:7&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Britton, K. A., Massaro, J. M., Murabito, J. M., Kreger, B. E., Hoffmann, U., &amp;amp; Fox, C. S. (2013). &lt;a href=&quot;https://www.jacc.org/doi/abs/10.1016/j.jacc.2013.06.027&quot;&gt;Body fat distribution, incident cardiovascular disease, cancer, and all-cause mortality.&lt;/a&gt; &lt;a href=&quot;#fnref:7&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:12&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Cerhan, J. R., Moore, S. C., Jacobs, E. J., Kitahara, C. M., Rosenberg, P. S., Adami, H. O., Ebbert, J. O., English, D. R., Gapstur, S. M., Giles, G. G., &amp;amp; Horn-Ross, P. L. (2014). &lt;a href=&quot;https://www.sciencedirect.com/science/article/abs/pii/S0025619613010409&quot;&gt;A pooled analysis of waist circumference and mortality in 650,000 adults.&lt;/a&gt; &lt;a href=&quot;#fnref:12&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:10&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Chen, Y., Yang, Y., Jiang, H., Liang, X., Wang, Y., &amp;amp; Lu, W. (2019). &lt;a href=&quot;https://onlinelibrary.wiley.com/doi/full/10.1002/oby.22423&quot;&gt;Associations of BMI and waist circumference with all-cause mortality: a 22-Year cohort study.&lt;/a&gt; &lt;a href=&quot;#fnref:10&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:6&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Jacobs, E. J., Newton, C. C., Wang, Y., Patel, A. V., McCullough, M. L., Campbell, P. T., Thun, M. J., &amp;amp; Gapstur, S. M. (2010). &lt;a href=&quot;https://jamanetwork.com/journals/jamainternalmedicine/article-abstract/775594&quot;&gt;Waist circumference and all-cause mortality in a large US cohort.&lt;/a&gt; &lt;a href=&quot;#fnref:6&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:8&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Lee, S. W., Son, J. Y., Kim, J. M., Hwang, S. S., Han, J. S., &amp;amp; Heo, N. J. (2018). &lt;a href=&quot;https://dom-pubs.onlinelibrary.wiley.com/doi/abs/10.1111/dom.13050&quot;&gt;Body fat distribution is more predictive of all-cause mortality than overall adiposity.&lt;/a&gt; &lt;a href=&quot;#fnref:8&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:17&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Visscher, T. L. S., Seidell, J. C., Molarius, A., van der Kuip, D., Hofman, A., &amp;amp; Witteman, J. C. M. (2001). &lt;a href=&quot;https://www.nature.com/articles/0801787&quot;&gt;A comparison of body mass index, waist–hip ratio and waist circumference as predictors of all-cause mortality among the elderly: the Rotterdam study.&lt;/a&gt; &lt;a href=&quot;#fnref:17&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:25&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Momma, H., Kawakami, R., Honda, T., &amp;amp; Sawada, S. S. (2022). &lt;a href=&quot;https://bjsm.bmj.com/content/56/13/755&quot;&gt;Muscle-strengthening activities are associated with lower risk and mortality in major non-communicable diseases: a systematic review and meta-analysis of cohort studies.&lt;/a&gt; &lt;a href=&quot;#fnref:25&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:27&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Patel, A. V., Hodge, J. M., Rees-Punia, E., Teras, L. R., Campbell, P. T., &amp;amp; Gapstur, S. M. (2020). &lt;a href=&quot;https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7417019/&quot;&gt;Relationship Between Muscle-Strengthening Activity and Cause-Specific Mortality in a Large US Cohort.&lt;/a&gt; &lt;a href=&quot;#fnref:27&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:28&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Miyachi, M. (2013). &lt;a href=&quot;https://bjsm.bmj.com/content/47/6/393.full&quot;&gt;Effects of resistance training on arterial stiffness: a meta-analysis.&lt;/a&gt; &lt;a href=&quot;#fnref:28&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:29&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Nuckols, G. (2022). &lt;a href=&quot;https://www.strongerbyscience.com/research-spotlight-lifting-longevity/&quot;&gt;What is the optimal dose of resistance training for longevity?&lt;/a&gt; &lt;a href=&quot;#fnref:29&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:42&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Brobakken, M. F., Krogsæter, I., Helgerud, J., Wang, E., &amp;amp; Hoff, J. (2023). &lt;a href=&quot;https://doi.org/10.14814/phy2.15853&quot;&gt;Abdominal aerobic endurance exercise reveals spot reduction exists: A randomized controlled trial.&lt;/a&gt; &lt;a href=&quot;#fnref:42&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>What's the Healthiest BMI?</title>
				<pubDate>Sun, 05 May 2024 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2024/05/05/healthiest_BMI/</link>
				<guid isPermaLink="true">http://mdickens.me/2024/05/05/healthiest_BMI/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;&lt;strong&gt;TLDR: 20 to 22.&lt;/strong&gt;&lt;/p&gt;

&lt;!-- more --&gt;

&lt;p&gt;&lt;em&gt;&lt;a href=&quot;https://mdickens.me/confidence_tags/&quot;&gt;Confidence&lt;/a&gt;: Somewhat likely.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Last updated 2024-05-18.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Health organizations such as the American Heart Association recommend a body-mass index (BMI) of 18.5 to 25. But that’s a wide range. Surely we can say something more specific, right? I don’t want to know what’s &lt;em&gt;acceptable&lt;/em&gt;, I want to know what’s &lt;em&gt;optimal&lt;/em&gt;. What’s the exact best BMI for health?&lt;sup id=&quot;fnref:32&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:32&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;
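
&lt;p&gt;(For reference, BMI is weight in kilograms divided by the square of height in meters. A minimal sketch, using an arbitrary example height and weight:)&lt;/p&gt;

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body-mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

# e.g. 70 kg at 1.78 m tall
print(round(bmi(70, 1.78), 1))  # 22.1
```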

&lt;p&gt;I couldn’t find an answer with a simple web search, which means now I have to write a post about it.&lt;sup id=&quot;fnref:31&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:31&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;h2 id=&quot;contents&quot;&gt;Contents&lt;/h2&gt;

&lt;ul id=&quot;markdown-toc&quot;&gt;
  &lt;li&gt;&lt;a href=&quot;#contents&quot; id=&quot;markdown-toc-contents&quot;&gt;Contents&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#the-big-bmi-studies&quot; id=&quot;markdown-toc-the-big-bmi-studies&quot;&gt;The big BMI studies&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#do-studies-still-overestimate-ideal-bmi&quot; id=&quot;markdown-toc-do-studies-still-overestimate-ideal-bmi&quot;&gt;Do studies still overestimate ideal BMI?&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#details-age-sex-ethnicity-and-cause-specific-mortality&quot; id=&quot;markdown-toc-details-age-sex-ethnicity-and-cause-specific-mortality&quot;&gt;Details: Age, sex, ethnicity, and cause-specific mortality&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#exercise&quot; id=&quot;markdown-toc-exercise&quot;&gt;Exercise&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#a-final-bit-of-evidence&quot; id=&quot;markdown-toc-a-final-bit-of-evidence&quot;&gt;A final bit of evidence&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#changelog&quot; id=&quot;markdown-toc-changelog&quot;&gt;Changelog&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#notes&quot; id=&quot;markdown-toc-notes&quot;&gt;Notes&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;the-big-bmi-studies&quot;&gt;The big BMI studies&lt;/h2&gt;

&lt;p&gt;I read&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt; two big meta-analyses on BMI and mortality:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Global BMI Mortality Collaboration (2016). &lt;a href=&quot;https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(16)30175-1/fulltext&quot;&gt;Body-mass index and all-cause mortality: individual-participant-data meta-analysis of 239 prospective studies in four continents.&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;Aune, Dagfinn; Sen, Abhijit; Prasad, Manya; Norat, Teresa; Janszky, Imre; Tonstad, Serena; Romundstad, Pål; Vatten, Lars J (2016). &lt;a href=&quot;https://www.bmj.com/content/353/bmj.i2156&quot;&gt;BMI and all cause mortality: systematic review and non-linear dose-response meta-analysis of 230 cohort studies with 3.74 million deaths among 30.3 million participants.&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We have absolute mountains of data on the association between BMI and mortality. In spite of that, it’s hard to say what BMI minimizes mortality risk because it depends on how you interpret the data.&lt;/p&gt;

&lt;p&gt;The raw mortality data says the healthiest BMI is 25—right on the cusp of the “overweight” range.&lt;/p&gt;

&lt;p&gt;But that’s wrong because confounding variables make low BMI look worse than it really is. Smokers tend to weigh less, and various health problems cause people to lose weight.&lt;/p&gt;

&lt;p&gt;If you adjust for smoking and health, and you also limit to studies with ≥15-year follow-ups to avoid confounding by undiagnosed illnesses,&lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt; &lt;strong&gt;the ideal BMI is 20–22&lt;/strong&gt;. Both meta-analyses agree on this conclusion (see &lt;a href=&quot;https://www.thelancet.com/action/showFullTableHTML?isHtml=true&amp;amp;tableId=tbl3&amp;amp;pii=S0140-6736%2816%2930175-1&quot;&gt;Table 3&lt;/a&gt; and &lt;a href=&quot;https://www.thelancet.com/cms/10.1016/S0140-6736(16)30175-1/attachment/e2a32d35-aeae-445a-8feb-fe6d9c03da38/mmc1.pdf&quot;&gt;Appendix eTable 8&lt;/a&gt; in the first paper, and see the &lt;a href=&quot;https://www.bmj.com/content/353/bmj.i2156&quot;&gt;abstract&lt;/a&gt; in the second paper).&lt;/p&gt;

&lt;p&gt;Here is a table of &lt;a href=&quot;https://en.wikipedia.org/wiki/Hazard_ratio&quot;&gt;hazard ratios&lt;/a&gt; (i.e., mortality rates relative to the 22.5–25 BMI group), taken from the first meta-analysis:&lt;/p&gt;

&lt;table&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;BMI&lt;/td&gt;
      &lt;td&gt;18.5–20&lt;/td&gt;
      &lt;td&gt;20–22.5&lt;/td&gt;
      &lt;td&gt;22.5–25&lt;/td&gt;
      &lt;td&gt;25–27.5&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;hazard ratio&lt;/td&gt;
      &lt;td&gt;1.13&lt;/td&gt;
      &lt;td&gt;1.00&lt;/td&gt;
      &lt;td&gt;1.00&lt;/td&gt;
      &lt;td&gt;1.07&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;(Berrington de Gonzalez et al. (2010)&lt;sup id=&quot;fnref:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;5&lt;/a&gt;&lt;/sup&gt;, a smaller (but still large) meta-analysis of 1.5 million white individuals, provides mortality data to a granularity of 1 point of BMI in its &lt;a href=&quot;https://www.nejm.org/doi/suppl/10.1056/NEJMoa1000367/suppl_file/nejmoa1000367_appendix.pdf&quot;&gt;Supplemental Appendix&lt;/a&gt;, Figure 1. It finds a uniform mortality rate between 20 and 25.)&lt;/p&gt;

&lt;p&gt;Mortality appears to increase slowly above 25 and more quickly below 20. If we interpolate the shape of the mortality curve, it looks like the exact ideal BMI is just under 23.&lt;sup id=&quot;fnref:5&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:5&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;6&lt;/a&gt;&lt;/sup&gt; But I think that’s too high, as I will explain in the next section.&lt;/p&gt;

&lt;h2 id=&quot;do-studies-still-overestimate-ideal-bmi&quot;&gt;Do studies still overestimate ideal BMI?&lt;/h2&gt;

&lt;p&gt;I have reason to suspect that these meta-analyses somewhat overestimate the ideal BMI even after controlling for smoking, health, and study follow-up time. The Global BMI Mortality Collaboration’s &lt;a href=&quot;https://www.thelancet.com/cms/10.1016/S0140-6736(16)30175-1/attachment/e2a32d35-aeae-445a-8feb-fe6d9c03da38/mmc1.pdf&quot;&gt;Supplemental Appendix&lt;/a&gt; has a nice chart showing what happens when you successively adjust for confounders:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/BMI-HR.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;The Y axis shows mortality rate. Notice how the un-adjusted mortality rate for the 20–22.5 group starts out considerably higher than for the 22.5–25 group, and (almost) every time you adjust for a confounder, its mortality rate gets lower. By the end, the mortality rates for 20–22.5 and for 22.5–25 are nearly identical.&lt;/p&gt;

&lt;p&gt;(More generally, as you adjust for more things, all BMI groups on the low end look better and all groups on the high end look worse.)&lt;/p&gt;

&lt;p&gt;It seems likely that this meta-analysis didn’t adjust for every possible confounder. Nearly every adjustment we do make makes the lower BMIs look better. So presumably, if we could adjust for the as-yet-unmeasured confounders, the ideal BMI would look lower than what these meta-analyses report.&lt;/p&gt;

&lt;p&gt;And indeed, Aune et al. (2016) found that BMIs in the 20–22.5 range looked healthier than 22.5–25 when restricted to studies with a ≥20-year follow-up (see &lt;a href=&quot;https://www.bmj.com/content/bmj/suppl/2016/05/04/bmj.i2156.DC1/aund030215.ww2.pdf&quot;&gt;Appendix 2&lt;/a&gt;, Table E). Global BMI Mortality Collaboration (2016) tried restricting to &amp;gt;5-year follow-ups (in the green and black points shown in eFigure 6 above), but it appears that 5 years is not enough to fully eliminate confounding by undiagnosed diseases.&lt;/p&gt;

&lt;p&gt;Aune et al. (2016)’s Table E reported hazard ratios by BMI with ≥20-year follow-up:&lt;/p&gt;

&lt;table&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;BMI&lt;/td&gt;
      &lt;td&gt;17.5&lt;/td&gt;
      &lt;td&gt;20&lt;/td&gt;
      &lt;td&gt;22&lt;/td&gt;
      &lt;td&gt;23&lt;/td&gt;
      &lt;td&gt;24&lt;/td&gt;
      &lt;td&gt;25&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;hazard ratio&lt;/td&gt;
      &lt;td&gt;1.06&lt;/td&gt;
      &lt;td&gt;0.99&lt;/td&gt;
      &lt;td&gt;0.99&lt;/td&gt;
      &lt;td&gt;1.00&lt;/td&gt;
      &lt;td&gt;1.02&lt;/td&gt;
      &lt;td&gt;1.06&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;The &lt;a href=&quot;https://www.nejm.org/doi/full/10.1056/NEJMoa1000367&quot;&gt;NCI Cohort Consortium meta-analysis&lt;/a&gt;&lt;sup id=&quot;fnref:3:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;5&lt;/a&gt;&lt;/sup&gt; found that both the 18.5–20 and 20–22.5 groups have lower mortality than 22.5–25 if you only look at studies with ≥15-year follow-ups—both the low-BMI groups had hazard ratios of 0.92 (see &lt;a href=&quot;https://www.nejm.org/cms/10.1056/NEJMoa1000367/asset/525efabe-18be-42e0-aa2f-b527c165687e/assets/images/large/nejmoa1000367_t2.jpg&quot;&gt;Table 2&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;Earlier I said the ideal BMI was around 23. If we extrapolate the pattern of adjustments from Global BMI Mortality Collaboration’s eFigure 6, or look at Aune et al. (2016)’s 20-year follow-up data, it looks like the ideal BMI is more like 21. (But really, we could just as well say that the ideal BMI is anywhere from 20 to 22 because mortality doesn’t detectably vary within that range.)&lt;/p&gt;

&lt;h2 id=&quot;details-age-sex-ethnicity-and-cause-specific-mortality&quot;&gt;Details: Age, sex, ethnicity, and cause-specific mortality&lt;/h2&gt;

&lt;p&gt;The Global BMI Mortality Collaboration study broke down mortality by age, sex, and cause of death (see &lt;a href=&quot;https://www.thelancet.com/cms/10.1016/S0140-6736(16)30175-1/attachment/e2a32d35-aeae-445a-8feb-fe6d9c03da38/mmc1.pdf&quot;&gt;Supplemental Appendix&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;Ideal BMI (before adjusting for confounders) goes up with age:&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Age&lt;/th&gt;
      &lt;th&gt;Ideal BMI&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;35–49&lt;/td&gt;
      &lt;td&gt;20–22.5&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;50–69&lt;/td&gt;
      &lt;td&gt;22.5–25&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;70–89&lt;/td&gt;
      &lt;td&gt;25–27.5&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Ideal BMI does not vary by sex, but males are more sensitive to BMI than females. That is, females can tolerate greater deviations from the healthy BMI range without as much increase in mortality risk.&lt;/p&gt;

&lt;p&gt;As far as I could find, there is no research whatsoever on the relationship between BMI and mortality for transgender people. Presumably, trans individuals should also target a BMI in the 20 to 22 range, but I can’t empirically confirm that.&lt;/p&gt;

&lt;p&gt;There’s mixed evidence on BMI and ethnicity:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;In 2004, the World Health Organization &lt;a href=&quot;https://pubmed.ncbi.nlm.nih.gov/14726171/&quot;&gt;suggested&lt;/a&gt;&lt;sup id=&quot;fnref:28&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:28&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;7&lt;/a&gt;&lt;/sup&gt; that most Asian sub-populations should use lower BMI cutoffs to define overweightness and obesity. A &lt;a href=&quot;https://www.researchgate.net/profile/Vanisha-Nambiar/publication/367462971_Consensus_Statement_for_Diagnosis_of_Obesity_Abdominal_Obesity_and_the_Metabolic_Syndrome_for_Asian_Indians_and_Recommendations_for_Physical_Activity_Medical_and_Surgical_Management/links/63d35ba6c97bd76a823c820e/Consensus-Statement-for-Diagnosis-of-Obesity-Abdominal-Obesity-and-the-Metabolic-Syndrome-for-Asian-Indians-and-Recommendations-for-Physical-Activity-Medical-and-Surgical-Management.pdf&quot;&gt;consensus statement on Asian Indians&lt;/a&gt;&lt;sup id=&quot;fnref:29&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:29&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;8&lt;/a&gt;&lt;/sup&gt; (the ethnicity that appears most sensitive to high BMI) proposed defining “normal” BMI for Indians as 18–23, overweight as &amp;gt;23, and obese as &amp;gt;25.&lt;/li&gt;
  &lt;li&gt;A large UK study&lt;sup id=&quot;fnref:26&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:26&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;9&lt;/a&gt;&lt;/sup&gt; found that different ethnicities had different mortality rates on the low and high ends of BMI, but not in the middle, i.e., the healthiest BMI did not vary by ethnicity. (See Table S2.8 and Figure S2.7 in the &lt;a href=&quot;https://www.thelancet.com/cms/10.1016/S2213-8587(18)30288-2/attachment/5f0b20fe-7932-4454-bdab-ee66e51cdf4e/mmc1.pdf&quot;&gt;Supplemental Appendix&lt;/a&gt;.)&lt;sup id=&quot;fnref:30&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:30&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;10&lt;/a&gt;&lt;/sup&gt; This is consistent with the consensus statements—the cutoffs for overweight/obese should probably vary by ethnicity—but it also suggests that &lt;em&gt;ideal&lt;/em&gt; BMI does not vary much.&lt;/li&gt;
  &lt;li&gt;There is theoretical reason to expect ideal BMI to depend on ethnicity. Namely, different ethnicities tend to distribute body fat differently, with Asians carrying more body fat at a given BMI than the average human, and blacks carrying slightly less. (But diabetes risk appears to increase slightly more rapidly with BMI for blacks than for whites, which contradicts this.&lt;sup id=&quot;fnref:27&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:27&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;11&lt;/a&gt;&lt;/sup&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We simply don’t have enough evidence to draw strong conclusions about how ethnicity affects ideal BMI. As best I can tell, all ethnicities should target a BMI in the 20–22 range.&lt;/p&gt;

&lt;p&gt;Ideal BMI varies by cause of death. This table (adapted from Global BMI Mortality Collaboration’s &lt;a href=&quot;https://www.thelancet.com/cms/10.1016/S0140-6736(16)30175-1/attachment/e2a32d35-aeae-445a-8feb-fe6d9c03da38/mmc1.pdf&quot;&gt;Supplemental Appendix&lt;/a&gt;, eTable 15) shows the hazard ratios for four BMI ranges broken down by some of the most common causes of death.&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt;18.5–20&lt;/th&gt;
      &lt;th&gt;20–22.5&lt;/th&gt;
      &lt;th&gt;22.5–25&lt;/th&gt;
      &lt;th&gt;25–27.5&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;coronary heart disease&lt;/td&gt;
      &lt;td&gt;0.95&lt;/td&gt;
      &lt;td&gt;0.89&lt;/td&gt;
      &lt;td&gt;1.00&lt;/td&gt;
      &lt;td&gt;1.18&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;other cardiovascular disease&lt;/td&gt;
      &lt;td&gt;1.14&lt;/td&gt;
      &lt;td&gt;0.98&lt;/td&gt;
      &lt;td&gt;1.00&lt;/td&gt;
      &lt;td&gt;1.11&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;stroke&lt;/td&gt;
      &lt;td&gt;1.15&lt;/td&gt;
      &lt;td&gt;1.01&lt;/td&gt;
      &lt;td&gt;1.00&lt;/td&gt;
      &lt;td&gt;1.05&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;respiratory disease&lt;/td&gt;
      &lt;td&gt;1.73&lt;/td&gt;
      &lt;td&gt;1.22&lt;/td&gt;
      &lt;td&gt;1.00&lt;/td&gt;
      &lt;td&gt;1.00&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;cancer&lt;/td&gt;
      &lt;td&gt;1.01&lt;/td&gt;
      &lt;td&gt;0.96&lt;/td&gt;
      &lt;td&gt;1.00&lt;/td&gt;
      &lt;td&gt;1.05&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;The second meta-analysis, &lt;a href=&quot;https://www.bmj.com/content/353/bmj.i2156&quot;&gt;Aune et al. (2016)&lt;/a&gt;, claimed “weight loss can precede the diagnosis of some neurological and respiratory diseases by as much as 10-15 years.” The effect discussed in the previous section, where longer follow-ups make lower BMIs look better, appears primarily driven by neurological and respiratory disease.&lt;/p&gt;

&lt;h2 id=&quot;exercise&quot;&gt;Exercise&lt;/h2&gt;

&lt;p&gt;Exercise has two major confounding effects:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Much of the harm of a high BMI comes from living a sedentary lifestyle. Controlling for cardiorespiratory fitness makes BMI look less important, especially at high BMIs.&lt;sup id=&quot;fnref:19&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:19&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;12&lt;/a&gt;&lt;/sup&gt;&lt;/li&gt;
  &lt;li&gt;Resistance training increases muscle mass, which raises BMI but not in a harmful way. If you control for resistance training, that should lower the apparent ideal BMI for low-muscle individuals while raising it for high-muscle individuals.&lt;sup id=&quot;fnref:11&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:11&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;13&lt;/a&gt;&lt;/sup&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;(I thought exercise might have a third effect: exercise disproportionately reduces respiratory disease, which makes low BMI look healthier. This turned out to be true, but adjusting for it only reduces the hazard ratio for the 20–22.5 group by 1%.&lt;sup id=&quot;fnref:20&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:20&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;14&lt;/a&gt;&lt;/sup&gt;)&lt;/p&gt;

&lt;p&gt;In a meta-analysis, &lt;a href=&quot;http://dx.doi.org/10.1016/j.pcad.2013.09.002&quot;&gt;Barry et al. (2013)&lt;/a&gt;&lt;sup id=&quot;fnref:19:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:19&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;12&lt;/a&gt;&lt;/sup&gt; found that physical activity substantially reduces the negative health effects of overweightness and obesity, but did not look at how physical activity affects individuals with a BMI of less than 25.&lt;/p&gt;

&lt;p&gt;Is exercise more helpful for overweight than for underweight individuals? (If so, that could raise the ideal BMI for physically active people.) Or is it equally good on both ends of the BMI spectrum? (In which case the ideal BMI wouldn’t change.)&lt;/p&gt;

&lt;p&gt;I found three studies which suggest that it’s the latter: exercise expands the healthy BMI range without obviously changing the ideal BMI.&lt;sup id=&quot;fnref:21&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:21&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;15&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;While we’re on the subject of exercise, I should mention that exercise matters more for health than BMI, especially if you already fall within the “normal” BMI range. Exercise &lt;strong&gt;halves&lt;/strong&gt; your mortality rate&lt;sup id=&quot;fnref:6&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:6&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;16&lt;/a&gt;&lt;/sup&gt; (for the collective “you”; individual results may vary). Moving your BMI from 25 to 22 only decreases your mortality rate by around 5%.&lt;/p&gt;

&lt;p&gt;(Reducing BMI from 25 to 22 has an effect size of 0.5. Going from sedentary to physically active (as defined by the &lt;a href=&quot;https://www.cdc.gov/physicalactivity/basics/adults/index.htm&quot;&gt;physical activity guidelines&lt;/a&gt;&lt;sup id=&quot;fnref:8&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:8&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;17&lt;/a&gt;&lt;/sup&gt;) has an effect size of 17.)&lt;/p&gt;
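
&lt;p&gt;(As a rough sanity check, these relative reductions can be recovered from the hazard ratios quoted earlier in this post. The inputs are rounded, so the outputs are only approximate:)&lt;/p&gt;

```python
# Hazard ratios quoted in this post (rounded figures):
hr_bmi_22 = 0.99    # Aune et al. (2016), Table E, 20-year follow-up
hr_bmi_25 = 1.06
hr_exercise = 0.48  # Zhao et al. (2020), meeting the activity guidelines

# Relative mortality reduction from moving BMI from 25 to 22:
bmi_benefit = 1 - hr_bmi_22 / hr_bmi_25   # ~0.07
# Relative mortality reduction from going from sedentary to active:
exercise_benefit = 1 - hr_exercise        # 0.52
```

&lt;p&gt;(The first number comes out a couple of percentage points higher than the figure in the main text because the inputs are coarse, but the conclusion is the same: exercise matters far more.)&lt;/p&gt;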

&lt;h2 id=&quot;a-final-bit-of-evidence&quot;&gt;A final bit of evidence&lt;/h2&gt;

&lt;p&gt;In &lt;a href=&quot;https://www.amazon.com/Eat-Drink-Be-Healthy-Harvard/dp/1501164775&quot;&gt;Eat, Drink, and Be Healthy&lt;/a&gt;, Walter Willett—who co-authored the Global BMI Mortality Collaboration meta-analysis—had this to say about the ideal BMI.&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;The 2015–2020 &lt;em&gt;Dietary Guidelines for Americans&lt;/em&gt; sets healthy weights as those corresponding to BMIs between 18.5 and 25. […] Panel members agreed that the risk of heart disease, diabetes, and high blood pressure begins to climb at a BMI of 22 or so. But they didn’t feel justified choosing such a low number as the cutoff between healthy and unhealthy weights, because doing so would have labeled a large majority of the U.S. population as overweight. […] [B]ut many people with a BMI of 23 to 25 are not at their healthiest weight.&lt;/p&gt;

  &lt;p&gt;[…]&lt;/p&gt;

  &lt;p&gt;What about BMIs under 18.5, which the government’s tables say isn’t healthy? This can, indeed, signal an unhealthy weight, especially if an individual has been losing weight or has an eating disorder. But people who have maintained a low BMI while eating healthfully and being active are usually just fine and have no reason to increase their weight.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h1 id=&quot;changelog&quot;&gt;Changelog&lt;/h1&gt;

&lt;ul&gt;
  &lt;li&gt;2024-05-18: Add a comment on BMI for transgender people, and fix a numeric error.&lt;/li&gt;
&lt;/ul&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:32&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;It makes sense why the American Heart Association doesn’t care about knowing the exact best BMI: why worry about people with slightly sub-optimal BMIs when a third of Americans have BMIs over 30? From a population-level perspective, getting everyone with a BMI of 24 down to 22 would reduce mortality by approximately zero percent.&lt;/p&gt;

      &lt;p&gt;But my BMI is 24 and I would like to know if I should lower it. &lt;a href=&quot;#fnref:32&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:31&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;In the process of writing this post, I went through this cycle five times:&lt;/p&gt;

      &lt;ol&gt;
        &lt;li&gt;My research is done! I will do a final read-through to tighten up the prose.&lt;/li&gt;
        &lt;li&gt;(in the middle of reading) Hmmm, this part makes me wonder about something. I will investigate further.&lt;/li&gt;
        &lt;li&gt;(5 hours later) Ok, I think I’ve figured it out. Let me write up these new findings.&lt;/li&gt;
        &lt;li&gt;(post is now 500 words longer) My research is done! I will do a final read-through to tighten up the prose.&lt;/li&gt;
      &lt;/ol&gt;

      &lt;p&gt;At no point in my many revisions did the TLDR change, so maybe it was all a waste of time, but at least now I have 14 citations instead of three, which makes me look more scholarly. &lt;a href=&quot;#fnref:31&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;By which I mean I read the introduction and conclusion and looked at the tables and figures. &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:2&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Honestly I don’t fully understand the reasoning behind this, but both meta-analyses claim that short-term studies are confounded by undiagnosed diseases that decrease BMI and increase mortality.&lt;/p&gt;

      &lt;p&gt;I think the idea is that long-term studies ignore anyone who dies in the first few years, and anyone who dies after that probably didn’t have an undiagnosed illness at the beginning of the study or else they would have died sooner. Many studies use 5-year delays to combat this, but apparently many illnesses go undiagnosed for more than 5 years, and you need at least 15 years to eliminate confounding. In fact, Aune et al. (2016) found that even a 15-year follow-up is noticeably more confounded than a 20-year follow-up—see &lt;a href=&quot;https://www.bmj.com/content/bmj/suppl/2016/05/04/bmj.i2156.DC1/aund030215.ww2.pdf&quot;&gt;Appendix 2&lt;/a&gt;, Table E. &lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:3&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Berrington de Gonzalez A, Hartge P, Cerhan JR, Flint AJ, Hannan L, MacInnis RJ, Moore SC, Tobias GS, Anton-Culver H, Freeman LB, Beeson WL, Clipp SL, English DR, Folsom AR, Freedman DM, Giles G, Hakansson N, Henderson KD, Hoffman-Bolton J, Hoppin JA, Koenig KL, Lee IM, Linet MS, Park Y, Pocobelli G, Schatzkin A, Sesso HD, Weiderpass E, Willcox BJ, Wolk A, Zeleniuch-Jacquotte A, Willett WC, Thun MJ. &lt;a href=&quot;https://www.nejm.org/doi/full/10.1056/NEJMoa1000367&quot;&gt;Body-Mass Index and Mortality among 1.46 Million White Adults.&lt;/a&gt; &lt;a href=&quot;#fnref:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:3:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:5&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I took the four groups from 18.5 to 27.5 (I included the 25–27.5 group because it has similar mortality risk to 18.5–20) and fitted a quadratic curve to them. The curve had a minimum at 23.1.&lt;/p&gt;

      &lt;p&gt;Originally I tried fitting a quadratic curve to eight BMI groups (from 15 BMI up to 40), but the curve wasn’t a good fit. When I fit the eight groups to a quartic curve, it fit well and had a minimum at 22.7, although a fourth-degree polynomial might have too many degrees of freedom.&lt;/p&gt;

      &lt;p&gt;&lt;img src=&quot;/assets/images/BMI-mortality-quartic-fit.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

      &lt;p&gt;(The quadratic curve is the little blue one, and the quartic curve is the big orange one.) &lt;a href=&quot;#fnref:5&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
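
      &lt;p&gt;(The quadratic fit can be reproduced in a few lines. I’m using numpy here for illustration; the original fit wasn’t necessarily done this way:)&lt;/p&gt;

```python
import numpy as np

# Midpoints of the four BMI groups (18.5-20, 20-22.5, 22.5-25, 25-27.5)
# and their hazard ratios, from the table in the main text.
bmi_midpoints = np.array([19.25, 21.25, 23.75, 26.25])
hazard_ratios = np.array([1.13, 1.00, 1.00, 1.07])

# Least-squares quadratic fit; the parabola's vertex -b/(2a) is its minimum.
a, b, c = np.polyfit(bmi_midpoints, hazard_ratios, 2)
ideal_bmi = -b / (2 * a)
print(round(ideal_bmi, 1))  # ~23.1
```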
    &lt;/li&gt;
    &lt;li id=&quot;fn:28&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;WHO Expert Consultation. (2004). &lt;a href=&quot;https://pubmed.ncbi.nlm.nih.gov/14726171/&quot;&gt;Appropriate body-mass index for Asian populations and its implications for policy and intervention strategies.&lt;/a&gt; &lt;a href=&quot;#fnref:28&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:29&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Misra A, Chowbey P, Makkar BM, Vikram NK, Wasir JS, Chadha D, Joshi SR, Sadikot S, Gupta R, Gulati S, Munjal YP. (2009). &lt;a href=&quot;https://www.researchgate.net/profile/Vanisha-Nambiar/publication/367462971_Consensus_Statement_for_Diagnosis_of_Obesity_Abdominal_Obesity_and_the_Metabolic_Syndrome_for_Asian_Indians_and_Recommendations_for_Physical_Activity_Medical_and_Surgical_Management/links/63d35ba6c97bd76a823c820e/Consensus-Statement-for-Diagnosis-of-Obesity-Abdominal-Obesity-and-the-Metabolic-Syndrome-for-Asian-Indians-and-Recommendations-for-Physical-Activity-Medical-and-Surgical-Management.pdf&quot;&gt;Consensus Statement for Diagnosis of Obesity, Abdominal Obesity and the Metabolic Syndrome for Asian Indians and Recommendations for Physical Activity, Medical and Surgical Management.&lt;/a&gt; &lt;a href=&quot;#fnref:29&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:26&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Bhaskaran K, dos-Santos-Silva I, Leon DA, Douglas IJ, Smeeth L. (2018). &lt;a href=&quot;https://www.thelancet.com/journals/landia/article/PIIS2213-8587(18)30288-2/fulltext&quot;&gt;Association of BMI with overall and cause-specific mortality: a population-based cohort study of 3·6 million adults in the UK&lt;/a&gt; &lt;a href=&quot;#fnref:26&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:30&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;This study had wide confidence intervals on its mortality rates for non-white ethnicities because most subjects were white. And the study only covered UK residents, which could affect the results due to local diet or selection effects from immigration. But it was the only study I could find that looked at all-cause mortality by ethnicity. I found good studies that looked at diabetes morbidity, but no others on all-cause mortality. &lt;a href=&quot;#fnref:30&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:27&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Shai I, Jiang R, Manson JE, Stampfer MJ, Willett WC, Colditz GA, Hu FB. (2006). &lt;a href=&quot;https://dash.harvard.edu/bitstream/handle/1/41293007/28918%201585.full.pdf&quot;&gt;Ethnicity, Obesity, and Risk of Type 2 Diabetes in Women.&lt;/a&gt; &lt;a href=&quot;#fnref:27&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:19&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Barry VW, Baruth M, Beets MW, Durstine JL, Liu J, Blair SN. (2013). &lt;a href=&quot;https://g-se.com/uploads/blog_adjuntos/fitness_vs._fatness_on_all_cause_mortality_a_meta_analysis.pdf&quot;&gt;Fitness vs. fatness on all-cause mortality: a meta-analysis.&lt;/a&gt; &lt;a href=&quot;#fnref:19&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:19:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:11&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Abramowitz MK, Hall CB, Amodu A, Sharma D, Androga L, Hawkins M. (2018). &lt;a href=&quot;https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5894968/&quot;&gt;Muscle mass, BMI, and mortality among adults in the United States: A population-based cohort study.&lt;/a&gt; &lt;a href=&quot;#fnref:11&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:20&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;&lt;a href=&quot;https://www.bmj.com/content/370/bmj.m2031&quot;&gt;Zhao et al. (2020)&lt;/a&gt;&lt;sup id=&quot;fnref:6:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:6&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;16&lt;/a&gt;&lt;/sup&gt;, a cohort study on exercise and mortality,&lt;sup id=&quot;fnref:10&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:10&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;18&lt;/a&gt;&lt;/sup&gt; reports in &lt;a href=&quot;https://www.bmj.com/highwire/markup/1030297/expansion&quot;&gt;Table 2&lt;/a&gt; that exercise has a hazard ratio of 0.48 on all-cause mortality (meaning people who meet the physical activity guidelines&lt;sup id=&quot;fnref:8:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:8&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;17&lt;/a&gt;&lt;/sup&gt; are only 48% as likely to die as people who don’t). A table of relevant hazard ratios for individuals who meet the exercise guidelines:&lt;/p&gt;

      &lt;table&gt;
        &lt;thead&gt;
          &lt;tr&gt;
            &lt;th&gt; &lt;/th&gt;
            &lt;th&gt;All-cause mortality&lt;/th&gt;
            &lt;th&gt;Lower respiratory disease&lt;/th&gt;
            &lt;th&gt;Flu/pneumonia&lt;/th&gt;
          &lt;/tr&gt;
        &lt;/thead&gt;
        &lt;tbody&gt;
          &lt;tr&gt;
            &lt;td&gt;hazard ratio&lt;/td&gt;
            &lt;td&gt;0.48&lt;/td&gt;
            &lt;td&gt;0.21&lt;/td&gt;
            &lt;td&gt;0.36&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
            &lt;td&gt;hazard ratio, controlling for BMI and other lifestyle factors&lt;/td&gt;
            &lt;td&gt;0.58&lt;/td&gt;
            &lt;td&gt;0.29&lt;/td&gt;
            &lt;td&gt;0.45&lt;/td&gt;
          &lt;/tr&gt;
        &lt;/tbody&gt;
      &lt;/table&gt;

      &lt;p&gt;Whether we control for BMI or not, exercise has a bigger effect on respiratory disease than on all-cause mortality. To calculate how much this matters, we need to know what proportion of people die of respiratory disease.&lt;/p&gt;

      &lt;p&gt;&lt;a href=&quot;https://ourworldindata.org/grapher/annual-number-of-deaths-by-cause?country=~North+America+%28WB%29&quot;&gt;Our World in Data&lt;/a&gt; says that, for North America and Europe, respiratory illnesses account for around 10% of deaths, and they consist of around 2/3 upper respiratory and 1/3 lower respiratory disease. Taking the corresponding weighted average of the BMI-controlled exercise hazard ratios from Zhao et al. (2020) gives an overall hazard ratio for respiratory disease of 0.40.&lt;/p&gt;

      &lt;p&gt;I calculated exercise and BMI hazard ratios for all causes excluding respiratory disease as the solution to &lt;code&gt;all-cause HR = 0.1 * respiratory HR + 0.9 * non-respiratory HR&lt;/code&gt;.&lt;/p&gt;
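      &lt;p&gt;As a sketch, the arithmetic works out as follows (the weights and hazard ratios are the ones quoted in this footnote):&lt;/p&gt;

```python
# Weighted respiratory HR from Zhao et al. (2020), using the BMI-controlled
# exercise hazard ratios: roughly 2/3 of respiratory deaths are upper
# respiratory (flu/pneumonia, HR 0.45) and 1/3 are lower respiratory (HR 0.29).
flu_pneumonia_hr = 0.45
lower_resp_hr = 0.29
respiratory_hr = (2 / 3) * flu_pneumonia_hr + (1 / 3) * lower_resp_hr
print(round(respiratory_hr, 2))  # 0.4

# Solve  all-cause HR = 0.1 * respiratory HR + 0.9 * non-respiratory HR
# for the non-respiratory HR, using the BMI-controlled all-cause HR of 0.58.
all_cause_hr = 0.58
resp_share = 0.1
non_respiratory_hr = (all_cause_hr - resp_share * respiratory_hr) / (1 - resp_share)
print(round(non_respiratory_hr, 2))  # 0.6
```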

      &lt;p&gt;Then I adjusted the BMI mortality rates for respiratory disease and for non-respiratory mortality using the respective exercise hazard ratios and re-combined them into an all-cause HR for physically active individuals.&lt;/p&gt;

      &lt;p&gt;I used the exercise HRs controlling for BMI rather than the uncontrolled HRs, on the theory that exercise probably lowers BMI on average, and we want to know the health effect of exercise irrespective of how it changes BMI.&lt;/p&gt;

      &lt;p&gt;It would be more accurate to fully break out mortality by cause instead of just breaking it into respiratory and non-respiratory, but I can’t really do that because Global BMI Mortality Collaboration (2016) and Zhao et al. (2020) don’t classify causes in the same way.&lt;/p&gt;

      &lt;p&gt;This table presents the hazard ratios (HRs) of different BMIs for people who meet the physical activity guidelines, accounting for respiratory disease (normalized so the 22.5–25 group has a HR of 1).&lt;/p&gt;

      &lt;table&gt;
        &lt;thead&gt;
          &lt;tr&gt;
            &lt;th&gt; &lt;/th&gt;
            &lt;th&gt;15–18.5&lt;/th&gt;
            &lt;th&gt;18.5–20&lt;/th&gt;
            &lt;th&gt;20–22.5&lt;/th&gt;
            &lt;th&gt;22.5–25&lt;/th&gt;
            &lt;th&gt;25–27.5&lt;/th&gt;
          &lt;/tr&gt;
        &lt;/thead&gt;
        &lt;tbody&gt;
          &lt;tr&gt;
            &lt;td&gt;All-cause hazard ratio&lt;/td&gt;
            &lt;td&gt;1.51&lt;/td&gt;
            &lt;td&gt;1.13&lt;/td&gt;
            &lt;td&gt;1.00&lt;/td&gt;
            &lt;td&gt;1.00&lt;/td&gt;
            &lt;td&gt;1.07&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
            &lt;td&gt;All-cause HR with exercise&lt;/td&gt;
            &lt;td&gt;1.45&lt;/td&gt;
            &lt;td&gt;1.11&lt;/td&gt;
            &lt;td&gt;0.99&lt;/td&gt;
            &lt;td&gt;1.00&lt;/td&gt;
            &lt;td&gt;1.07&lt;/td&gt;
          &lt;/tr&gt;
        &lt;/tbody&gt;
      &lt;/table&gt;

      &lt;p&gt;Accounting for the effect of exercise on respiratory disease did make lower BMIs look slightly better, but it made hardly any difference. &lt;a href=&quot;#fnref:20&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:21&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;ol&gt;
        &lt;li&gt;Berrington de Gonzalez et al. (2010)&lt;sup id=&quot;fnref:3:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;5&lt;/a&gt;&lt;/sup&gt; (in its &lt;a href=&quot;https://www.nejm.org/doi/suppl/10.1056/NEJMoa1000367/suppl_file/nejmoa1000367_appendix.pdf&quot;&gt;Supplemental Appendix&lt;/a&gt;, Table 7)&lt;sup id=&quot;fnref:25&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:25&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;19&lt;/a&gt;&lt;/sup&gt; found that physical activity reduced hazard ratios for both low and high BMIs. It also slightly decreased the 20–22.5 BMI hazard ratio from 1.02 to 0.99, which could mean exercise slightly decreases the ideal BMI, but the change in hazard ratio was highly non-significant (p &amp;gt; 0.6) and non-monotonic (the moderate-activity group with 20–22.5 BMI had a hazard ratio of 0.97, lower than either the low-activity or high-activity group).&lt;/li&gt;
        &lt;li&gt;Garfinkel et al. (1988)&lt;sup id=&quot;fnref:22&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:22&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;20&lt;/a&gt;&lt;/sup&gt; wrote that exercise had a greater effect on mortality for both underweight and overweight individuals than for normal-weight people, but the paper did not provide numbers.&lt;/li&gt;
        &lt;li&gt;Lee &amp;amp; Kim (2020)&lt;sup id=&quot;fnref:23&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:23&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;21&lt;/a&gt;&lt;/sup&gt; looked specifically at underweight older adults and found that exercise had a hazard ratio of 0.68. Zhao et al. (2020)&lt;sup id=&quot;fnref:6:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:6&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;16&lt;/a&gt;&lt;/sup&gt; found an exercise hazard ratio of 0.48 for the general population, but Zhao used a stricter definition of physical activity, so these numbers aren’t directly comparable.&lt;/li&gt;
      &lt;/ol&gt;
      &lt;p&gt;&lt;a href=&quot;#fnref:21&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:6&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Zhao M, Veeranki S P, Magnussen C G, Xi B. (2020). &lt;a href=&quot;https://www.bmj.com/content/370/bmj.m2031&quot;&gt;Recommended physical activity and all cause and cause specific mortality in US adults: prospective cohort study.&lt;/a&gt; &lt;a href=&quot;#fnref:6&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:6:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt; &lt;a href=&quot;#fnref:6:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;3&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:8&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;To meet the &lt;a href=&quot;https://www.cdc.gov/physicalactivity/basics/adults/index.htm&quot;&gt;physical activity guidelines&lt;/a&gt;, you must do both of the following:&lt;/p&gt;

      &lt;ol&gt;
        &lt;li&gt;At least one of:
          &lt;ul&gt;
            &lt;li&gt;150 minutes of moderate physical activity per week&lt;/li&gt;
            &lt;li&gt;75 minutes of vigorous physical activity per week&lt;/li&gt;
          &lt;/ul&gt;
        &lt;/li&gt;
        &lt;li&gt;Muscle strengthening activity at least two days per week&lt;/li&gt;
      &lt;/ol&gt;
      &lt;p&gt;&lt;a href=&quot;#fnref:8&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:8:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:10&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I cited this cohort study instead of a meta-analysis because I couldn’t find any meta-analyses that looked at exercise and cause-specific mortality, specifically including respiratory disease. The &lt;a href=&quot;https://journals.lww.com/acsm-msse/fulltext/2019/06000/physical_activity,_all_cause_and_cardiovascular.22.aspx&quot;&gt;2018 Physical Activity Guidelines Advisory Committee report&lt;/a&gt; (the most comprehensive meta-analysis I could find; it’s actually a meta-meta-analysis) looked at all-cause mortality and heart disease, but not respiratory disease. &lt;a href=&quot;#fnref:10&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:25&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Berrington de Gonzalez et al. (2010) only included white individuals. The relationship between BMI and mortality varies a little with race, but I have no reason to believe that race changes the effect of physical activity. &lt;a href=&quot;#fnref:25&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:22&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Garfinkel L, Stellman SD. (1988). &lt;a href=&quot;https://acsjournals.onlinelibrary.wiley.com/doi/abs/10.1002/1097-0142(19881015)62:1+%3C1844::AID-CNCR2820621328%3E3.0.CO;2-O&quot;&gt;Mortality by relative weight and exercise.&lt;/a&gt; &lt;a href=&quot;#fnref:22&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:23&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Lee I, Kim B. (2020). &lt;a href=&quot;https://www.ksep-es.org/journal/view.php?number=841&quot;&gt;Association between Estimated Cardiorespiratory Fitness and All-cause Mortality in Underweight Older Adults.&lt;/a&gt; &lt;a href=&quot;#fnref:23&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>Caffeine Cycling Self-Experiment</title>
				<pubDate>Thu, 11 Apr 2024 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2024/04/11/caffeine_self_experiment/</link>
				<guid isPermaLink="true">http://mdickens.me/2024/04/11/caffeine_self_experiment/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;&lt;em&gt;Last updated 2024-07-26 to clarify wording.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;a href=&quot;https://mdickens.me/confidence_tags/&quot;&gt;Confidence&lt;/a&gt;: Likely.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I conducted an experiment on myself to see if I would develop a tolerance to caffeine from taking it three days a week. The results suggest that I didn’t. Caffeine had just as big an effect at the end of my four-week trial as it did at the beginning.&lt;/p&gt;

&lt;p&gt;This outcome is statistically significant (p = 0.016), but the data show a weird pattern: caffeine’s effectiveness went &lt;em&gt;up&lt;/em&gt; over time instead of staying flat. I don’t know how to explain that, which makes me suspicious of the experiment’s findings.&lt;/p&gt;

&lt;!-- more --&gt;

&lt;h2 id=&quot;contents&quot;&gt;Contents&lt;/h2&gt;

&lt;ul id=&quot;markdown-toc&quot;&gt;
  &lt;li&gt;&lt;a href=&quot;#contents&quot; id=&quot;markdown-toc-contents&quot;&gt;Contents&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#experimental-procedure&quot; id=&quot;markdown-toc-experimental-procedure&quot;&gt;Experimental procedure&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#calibration-phase&quot; id=&quot;markdown-toc-calibration-phase&quot;&gt;Calibration phase&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#abstinence-phase&quot; id=&quot;markdown-toc-abstinence-phase&quot;&gt;Abstinence phase&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#experimental-phase&quot; id=&quot;markdown-toc-experimental-phase&quot;&gt;Experimental phase&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#what-explains-these-results&quot; id=&quot;markdown-toc-what-explains-these-results&quot;&gt;What explains these results?&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#an-offer-to-readers&quot; id=&quot;markdown-toc-an-offer-to-readers&quot;&gt;An offer to readers&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#notes&quot; id=&quot;markdown-toc-notes&quot;&gt;Notes&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;experimental-procedure&quot;&gt;Experimental procedure&lt;/h2&gt;

&lt;p&gt;(I described this procedure in a &lt;a href=&quot;/2024/03/02/caffeine_tolerance/#appendix-b-pre-registration-for-a-caffeine-self-experiment&quot;&gt;pre-registration&lt;/a&gt; on a previous post.)&lt;/p&gt;

&lt;p&gt;I test my reaction time by taking the &lt;a href=&quot;https://humanbenchmark.com/tests/reactiontime&quot;&gt;humanbenchmark.com test&lt;/a&gt; twice in a row. One test consists of 5 reaction events, for a total of 10 reaction events; I take my reaction time at that moment to be the average of the 10. I always use the same computer, monitor, and mouse so that latency stays consistent.&lt;/p&gt;

&lt;p&gt;I measure the effect of caffeine using a reaction time test because (a) caffeine is known to improve reaction time, (b) reaction time is easy to test,&lt;sup id=&quot;fnref:12&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:12&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt; (c) it’s unlikely to improve with practice so it makes for a good consistent test variable, and (d) it’s hard to placebo-effect myself into improving my reaction time (which is important because I am not blinding myself&lt;sup id=&quot;fnref:6&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:6&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt;).&lt;/p&gt;

&lt;p&gt;I conduct three phases as specified below:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Phase 1. Calibration phase.&lt;/strong&gt; Continue drinking coffee three days a week as I have been for the past several years: two cups of coffee (~24 ounces) with three scoops of grounds,&lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt; always the same brand,&lt;sup id=&quot;fnref:7&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:7&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt; drunk on Monday, Wednesday, and Friday morning.&lt;/p&gt;

&lt;p&gt;Take a reaction time test twice a day (following the schedule described in phase 3 below). Continue for four weeks. Plot a regression of my reaction time across the four weeks. The purpose of the calibration phase is to ensure that my reaction time does not improve from practicing every day—the regression line should be flat.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Phase 2. Abstinence phase.&lt;/strong&gt; Abstain from caffeine for one week (9 days total, in between the last Friday of the calibration phase and the first Monday of the test phase). Test reaction time every day and measure the slope of reaction times across the 9 days. If I was habituated to caffeine in phase 1 then my reaction time should improve over the course of phase 2 as my tolerance wears off.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Phase 3. Experimental phase.&lt;/strong&gt; Resume drinking coffee three days a week and continue for four weeks. Take a reaction time test twice a day, at (say) 8am and then 10am—the exact time doesn’t matter, but take the first test before having coffee and the second 30+ minutes after coffee. (Or on days when I don’t have coffee, take the first test after I wake up and the second test an hour or two later.)&lt;/p&gt;

&lt;h2 id=&quot;calibration-phase&quot;&gt;Calibration phase&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;I wrote the first draft of this section after completing the calibration phase, and I wrote the first draft of &lt;a href=&quot;#abstinence-phase&quot;&gt;Abstinence phase&lt;/a&gt; after completing the abstinence phase. So when I wrote them, I didn’t know the full results of the experiment yet.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I ran a four-week calibration phase to check some assumptions of the experiment:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Caffeine should improve reaction time.&lt;/li&gt;
  &lt;li&gt;If my post-caffeine test outperforms my pre-caffeine test, it should be because of the caffeine, not because my reaction time gets better later in the day.&lt;/li&gt;
  &lt;li&gt;Practicing shouldn’t improve my reaction time.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The calibration phase confirmed the first two assumptions:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;The post-caffeine tests outperformed the pre-caffeine tests by an average of 13 ms (difference = –13 ms, p = 0.025).&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;5&lt;/a&gt;&lt;/sup&gt;&lt;/li&gt;
  &lt;li&gt;On no-caffeine days, the second test did not outperform the first test (difference 0.4 ms, p = 0.9).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The third assumption was sort-of-confirmed: my reaction time did not improve over the course of the calibration phase. In fact, it got &lt;em&gt;worse&lt;/em&gt; at a rate of 0.87 ms/day (p = 0.014) for caffeine tests and 1.04 ms/day (p = 0.006) for no-caffeine tests.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/caf-calibration-regression.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;(Remember that higher reaction times are worse.)&lt;/em&gt;&lt;/p&gt;
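&lt;p&gt;For reference, the per-day slopes above are ordinary least-squares regression slopes of reaction time against day number. A minimal sketch of that calculation on made-up numbers (illustrative only, not my real data):&lt;/p&gt;

```python
def ols_slope(days, reaction_times_ms):
    """Ordinary least-squares slope of reaction time (ms) vs. day number."""
    n = len(days)
    mean_x = sum(days) / n
    mean_y = sum(reaction_times_ms) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(days, reaction_times_ms))
    var = sum((x - mean_x) ** 2 for x in days)
    return cov / var

# Illustrative data: reaction time drifting upward by roughly 1 ms/day plus noise.
days = list(range(10))
times = [260, 262, 259, 263, 264, 262, 266, 265, 268, 267]
print(round(ols_slope(days, times), 2))  # 0.86
```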

&lt;p&gt;Why did my reaction times get worse? It’s not because I was getting habituated to caffeine. I had already been taking caffeine 3 days a week for years, so I would have been fully habituated long before starting the calibration phase.&lt;/p&gt;

&lt;p&gt;Could it be because I started sleeping worse? That’s part of the reason. I regressed reaction time (on no-caffeine tests) against time spent in bed the previous night. Over the full experiment (not just the calibration phase&lt;sup id=&quot;fnref:11&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:11&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;6&lt;/a&gt;&lt;/sup&gt;), each additional hour of sleep improved my reaction time by 4.9 ms&lt;sup id=&quot;fnref:10&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:10&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;7&lt;/a&gt;&lt;/sup&gt; (p &amp;lt; 0.0002, r&lt;sup&gt;2&lt;/sup&gt; = 0.24). Controlling for time-in-bed flattened the slope of reaction time across non-caffeine tests from 1.04 ms/day to 0.77 ms/day. But that still leaves almost 3/4 of the slope unexplained.&lt;sup id=&quot;fnref:9&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:9&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;8&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/caf-calibration-regression-controlled.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;My best explanation: as the test became part of my routine, I subconsciously started taking it less seriously and had a harder time staying focused. On most trials I get reaction times around 250–270 ms, but occasionally I lose focus and end up taking 330 ms or longer to react. As I recall, that didn’t happen at all during the first week or two of the calibration phase; it only started happening later.&lt;/p&gt;

&lt;p&gt;My reaction time can’t continue getting worse forever. But this does raise a concern about the results from the experimental phase: if my performance gets worse during the experimental phase, it might be because I’m getting habituated to caffeine, or it might be a continuation of the trend that happened during the calibration phase.&lt;/p&gt;

&lt;h2 id=&quot;abstinence-phase&quot;&gt;Abstinence phase&lt;/h2&gt;

&lt;p&gt;I abstained from caffeine for 9 days. If I had previously been habituated to caffeine, you’d expect my reaction time to improve over those 9 days as my caffeine withdrawal subsided. Specifically, if caffeine improves reaction time by 13 ms, you’d expect my reaction time to get better by 13 ms over the course of the 9 days (= 1.44 ms/day). Instead, my reaction time got &lt;em&gt;worse&lt;/em&gt; at a rate of 0.77 ms/day. This is not significantly different from 0 (p = 0.4), but it &lt;em&gt;is&lt;/em&gt; significantly different from –1.44 ms/day (p = 0.04).&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/caf-abstinence-regression.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;This plot shows the likelihood function for caffeine retention as indicated by the slope of reaction time over the abstinence phase:&lt;sup id=&quot;fnref:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;9&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/caf-abstinence-likelihood.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;The maximum-likelihood estimate is 1.53—that is, caffeine becomes 53% &lt;em&gt;more&lt;/em&gt; effective after my body adapts to it. If my reaction time got worse during abstinence, that implies caffeine tolerance was making my reaction time better. I’m pretty sure that’s wrong—my reaction time must have gotten worse for some other reason.&lt;/p&gt;
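&lt;p&gt;To make the mapping from slope to retention concrete, here is a sketch of how the maximum-likelihood estimate falls out. This is my reconstruction of the model (retention assumed to map linearly onto the expected slope), not necessarily the exact code behind the plot:&lt;/p&gt;

```python
# If caffeine's 13 ms benefit decays over the 9-day abstinence, a retention
# of r implies an expected reaction-time slope of (r - 1) * 13/9 ms/day:
# r = 0 (no retention, full tolerance) predicts improvement at -1.44 ms/day,
# and r = 1 (full retention) predicts a flat slope.
full_effect_ms = 13
abstinence_days = 9
daily_decay = full_effect_ms / abstinence_days  # about 1.44 ms/day

def expected_slope(retention):
    return (retention - 1) * daily_decay

observed_slope = 0.77  # ms/day, from the abstinence-phase regression

# With a Gaussian likelihood centered on the expected slope, the maximum-
# likelihood retention is the one whose expected slope matches the observation:
mle_retention = 1 + observed_slope / daily_decay
print(round(mle_retention, 2))  # 1.53
```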

&lt;p&gt;Controlling for time-in-bed flattens the slope to nearly 0:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/caf-abstinence-regression-controlled.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;h2 id=&quot;experimental-phase&quot;&gt;Experimental phase&lt;/h2&gt;

&lt;p&gt;For the experimental phase, I resumed taking caffeine 3 days a week.&lt;/p&gt;

&lt;p&gt;Over the course of the four-week phase, I did not become habituated to caffeine. In fact, I became &lt;em&gt;sensitized&lt;/em&gt;—caffeine got more effective, not less. My post-caffeine reaction time changed at a rate of –0.39 ms/day (p = 0.016) (remember, a negative number means faster reaction time). My reaction time without caffeine also improved to a lesser extent (slope = –0.23 ms/day (p = 0.4); the difference in slopes was not statistically significant (p = 0.5)). So either I did not develop a caffeine tolerance, or any caffeine tolerance I developed was outweighed by some force working in the opposite direction.&lt;/p&gt;

&lt;p&gt;In the plot below, “nocaf” indicates reaction times without having taken caffeine first, and “caf” gives reaction times tested approximately 30 minutes after taking caffeine.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/caf-experimental-regression.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;According to these regression lines, reaction time improved by a total of 5.7 ms with caffeine and 9.8 ms without caffeine over the four weeks.&lt;/p&gt;

&lt;p&gt;You will notice a very low point on day 0. That happened because I accidentally reacted too early on one of the trials, but by pure coincidence I reacted at just the right moment to score a ~20ms reaction time.&lt;sup id=&quot;fnref:13&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:13&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;10&lt;/a&gt;&lt;/sup&gt; If I run a regression starting one day later to exclude this anomaly, the slopes for caffeine tests and no-caffeine tests look comparable (caffeine slope = –0.47 ms/day, p = 0.014; no-caffeine slope = –0.54 ms/day, p = 0.027).&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/caf-experimental-regression-skip-day-0.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;This plot shows the likelihood function of caffeine retention according to the slope of performance on caffeine tests (excluding day 0):&lt;sup id=&quot;fnref:3:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;9&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/caf-experimental-likelihood.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;The likelihood function has a mean and a maximum of 1.86, which says caffeine becomes 86% more effective after my body adjusts to it. This likelihood function has only 0.6% of its mass below retention = 1 (i.e., retention = 1 has a p-value of 2 * 0.006 = 0.012). This likelihood function strongly suggests that caffeine gets &lt;em&gt;more&lt;/em&gt; effective over time, not less.&lt;/p&gt;

&lt;p&gt;I don’t believe this result. It’s more likely that some confounding factor caused my reaction time to improve, but I can’t think of what that confounding factor might be.&lt;/p&gt;

&lt;h2 id=&quot;what-explains-these-results&quot;&gt;What explains these results?&lt;/h2&gt;

&lt;p&gt;Could my reaction time have improved because I was getting more sleep? If I control for time spent in bed the previous night, the slope of reaction times vs. days does flatten from –0.54 to –0.40, but this only explains about 1/4 of the slope.&lt;sup id=&quot;fnref:14&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:14&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;11&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/caf-experimental-regression-controlled.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Maybe my performance improved due to the cumulative effect of sleeping well for many nights in a row? But I spent less time in bed during the experimental phase (average 8.73 hours) than during the calibration phase (8.99 hours), so if anything I should have gotten worse, not better.&lt;/p&gt;

&lt;p&gt;Could this be a genuine result? Could caffeine actually become more effective when I take it for longer? Three &lt;a href=&quot;/2024/03/02/caffeine_tolerance/#experimental-evidence-on-intermittent-dosing&quot;&gt;experiments on rats&lt;/a&gt;&lt;sup id=&quot;fnref:4&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:4&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;12&lt;/a&gt;&lt;/sup&gt; found something similar: rats who took caffeine daily developed a tolerance, but rats who took caffeine on alternating days became sensitized (its effect got larger). (Plus one study&lt;sup id=&quot;fnref:5&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:5&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;13&lt;/a&gt;&lt;/sup&gt; found neither tolerance nor sensitization.)&lt;/p&gt;

&lt;p&gt;This hints that caffeine sensitization is a real thing. But the results from the rat experiments don’t look the same as my results. They found that rats’ performance on caffeine days increased over the course of the experiments while performance on placebo days stayed flat. In contrast, my own performance improved both on caffeine days and on “placebo” days (I didn’t take a placebo, I just took nothing).&lt;/p&gt;

&lt;p&gt;But perhaps caffeine sensitization works differently in humans than in rats. If a habituated caffeine user experiences withdrawal symptoms, then maybe a sensitized user experiences “anti-withdrawal”, making them perform better even when they don’t take caffeine. Maybe my brain thinks, “I don’t know what’s going on, the caffeine levels inside me keep fluctuating, I’d better delete some neurotransmitter receptors just to be safe,” and this ends up making me more alert with or without caffeine. But why didn’t it happen that way in the rat studies?&lt;/p&gt;

&lt;p&gt;Earlier, when I talked about the calibration phase, I hypothesized that my performance got worse because I subconsciously stopped taking the tests as seriously. Could the opposite have happened in the experimental phase?&lt;/p&gt;

&lt;p&gt;I don’t think so. I noticed my performance getting worse when I looked at the results just after finishing the calibration phase. So I might have mentally resolved to focus harder. But if so, you’d expect my performance to jump up and stay persistently high, or perhaps to jump up and then decline again, but not to start low and then steadily improve.&lt;/p&gt;

&lt;p&gt;I thought the results might have something to do with my computer’s latency, but my experiment already controlled most of the parameters that might change the latency (I always tested on the same computer with the same hardware in a browser with a single tab open and with no other applications open except Emacs and Terminal). It occurred to me that perhaps whether my second monitor was on or off might affect the latency, but I tested this and saw no difference.&lt;/p&gt;

&lt;hr /&gt;

&lt;p&gt;The results of my experiment suggest that I did not become habituated to caffeine. I can’t figure out what they &lt;em&gt;do&lt;/em&gt; suggest, but at least I can say that I probably don’t develop a tolerance from taking caffeine 3 days a week.&lt;/p&gt;

&lt;h2 id=&quot;an-offer-to-readers&quot;&gt;An offer to readers&lt;/h2&gt;

&lt;p&gt;If you conduct a caffeine experiment on yourself with similar methodology to mine, you can send your data to web at mdickens dot me and I’ll analyze it and make some graphs for you. At a minimum, each data point should include the date, your reaction time, and whether you had caffeine.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Source code and data for this experiment are available &lt;a href=&quot;https://github.com/michaeldickens/public-scripts/tree/master/caffeine&quot;&gt;on GitHub&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:12&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;And, I didn’t know this before running the experiment, but it turns out that it’s easy to get statistically significant results with reaction time. My reaction time had a day-to-day standard deviation of only 11 ms, so I can detect pretty small effect sizes with just a few days of samples. &lt;a href=&quot;#fnref:12&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
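      &lt;p&gt;A rough sketch of why (a standard power approximation, not a formal analysis I ran):&lt;/p&gt;

```python
import math

# Back-of-the-envelope: with a day-to-day standard deviation of 11 ms, how
# large an effect can a two-sample comparison detect? Using the standard
# approximation  MDE = (z_alpha + z_power) * sd * sqrt(2/n)  with a two-sided
# alpha of 0.05 (z = 1.96) and 80% power (z = 0.84):
sd_ms = 11.0
z = 1.96 + 0.84

def min_detectable_effect(n_days_per_condition):
    return z * sd_ms * math.sqrt(2 / n_days_per_condition)

for n in (5, 10, 20):
    print(n, round(min_detectable_effect(n), 1))
# prints roughly: 5 days -> 19.5 ms, 10 days -> 13.8 ms, 20 days -> 9.7 ms
```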
    &lt;/li&gt;
    &lt;li id=&quot;fn:6&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;It’s possible to conduct a self-blinded caffeine experiment as follows:&lt;/p&gt;

      &lt;ol&gt;
        &lt;li&gt;Label caffeine pills and placebo pills as pill A and pill B in a random order.&lt;/li&gt;
        &lt;li&gt;On Monday/Wednesday/Friday, take pill A. On Tuesday/Thursday/Saturday, take pill B. (Skip Sunday.)&lt;/li&gt;
        &lt;li&gt;Each week, re-randomize the ordering of pill A and pill B so you can’t figure out which one is which.&lt;/li&gt;
      &lt;/ol&gt;

      &lt;p&gt;I didn’t want to do that for two reasons:&lt;/p&gt;

      &lt;ul&gt;
        &lt;li&gt;I already suspect that caffeine makes me feel much better while lifting weights, so I don’t want to spend potentially several weeks lifting weights without caffeine.&lt;/li&gt;
        &lt;li&gt;I prefer to drink coffee rather than take pills, and you can’t really blind coffee because decaf tastes different.&lt;/li&gt;
      &lt;/ul&gt;
      &lt;p&gt;&lt;a href=&quot;#fnref:6&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:2&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;A standard serving is two scoops per six ounces, which would require me to use eight scoops, but I don’t like my coffee that strong. If you take higher doses of caffeine, you’ll probably get habituated faster. (I have no evidence that that’s true, but it sounds right to me.) &lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:7&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Signature Select Classic Roast, because it’s the cheapest and it tastes as good as the best brands I’ve tried. &lt;a href=&quot;#fnref:7&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;For this calculation I compared post-caffeine and pre-caffeine tests on the same day, ignoring the test results for days where I didn’t take caffeine. If instead I compare post-caffeine tests vs. all no-caffeine tests (including on days when I don’t take caffeine), the difference between averages is 8 ms. However, the difference in performance without caffeine on caffeine days vs. no-caffeine days is not statistically significant (difference = 3 ms, p = 0.5). I performed slightly worse on caffeine days, which is the opposite of what I’d predict—subjectively, I feel more energetic on caffeine days even when I haven’t taken caffeine yet. &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:11&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;A regression over just the calibration phase gives a slope of –5.50 ms/hour (p = 0.01337 (nice)). &lt;a href=&quot;#fnref:11&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:10&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;That means a dose of coffee is worth 2 hours of sleep in terms of its immediate effect on reaction time. &lt;a href=&quot;#fnref:10&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:9&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I tried regressing reaction time against time-in-bed the previous two nights, but the second night did not add any predictive power.&lt;/p&gt;

      &lt;p&gt;I also looked at time spent asleep according to my &lt;a href=&quot;https://www.sleepcycle.com/&quot;&gt;sleep tracking app&lt;/a&gt;, which I suspected wouldn’t work as well because I’ve noticed it’s pretty bad at identifying when I’m asleep. And indeed, regressing reaction time against time “asleep” gave a similar slope as regressing against time-in-bed, but with a worse p-value. &lt;a href=&quot;#fnref:9&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:3&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I converted slope into retention as follows:&lt;/p&gt;

      &lt;ol&gt;
        &lt;li&gt;multiply slope by the number of days in the phase to get the total reaction time change&lt;/li&gt;
        &lt;li&gt;divide by the baseline benefit of caffeine (13 ms) to get the degree of habituation (0 = no habituation, 1 = full habituation, -1 = reverse habituation i.e. caffeine got more effective)&lt;/li&gt;
        &lt;li&gt;subtract from 1 to get retention (retention is basically the inverse of habituation)&lt;/li&gt;
      &lt;/ol&gt;
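
      &lt;p&gt;A minimal Python sketch of this conversion (the function and its argument names are mine, not taken from the analysis code; the 13 ms baseline benefit is the figure mentioned above):&lt;/p&gt;

```python
def retention(slope, days_in_phase, baseline_benefit=13.0):
    """Convert a regression slope (ms/day) into a retention score.

    Mirrors the three steps above; 13 ms is the baseline benefit
    of caffeine used as a fixed parameter.
    """
    total_change = slope * days_in_phase           # step 1: total reaction time change
    habituation = total_change / baseline_benefit  # step 2: 0 = none, 1 = full
    return 1 - habituation                         # step 3: retention
```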

      &lt;p&gt;Unlike in my caffeine &lt;a href=&quot;/2024/03/29/does_caffeine_stop_working/&quot;&gt;literature review&lt;/a&gt;, I treated the baseline benefit as a fixed parameter instead of a distribution because that makes the math easier (but the real reason is that I wrote this part of the code before I wrote the code for the literature review). &lt;a href=&quot;#fnref:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:3:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:13&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Perhaps I should have re-run the trial, but I was following a strict rule not to re-run trials under any circumstances, to make sure I had no wiggle room to bias the results. &lt;a href=&quot;#fnref:13&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:14&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;The displayed graph shows no-caffeine trials. Controlling for sleep on caffeine trials has a similar effect, flattening the slope from –0.47 to –0.41. &lt;a href=&quot;#fnref:14&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:4&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;C. J. Meliska, R. E. Landrum &amp;amp; T. A. Landrum (1990). &lt;a href=&quot;https://pubmed.ncbi.nlm.nih.gov/2320659/&quot;&gt;Tolerance and sensitization to chronic and subchronic oral caffeine: effects on wheelrunning in rats.&lt;/a&gt;&lt;/p&gt;

      &lt;p&gt;Omar Cauli, Annalisa Pinna, Valentina Valentini &amp;amp; Micaela Morelli (2003). &lt;a href=&quot;https://www.nature.com/articles/1300240&quot;&gt;Subchronic Caffeine Exposure Induces Sensitization to Caffeine and Cross-Sensitization to Amphetamine Ipsilateral Turning Behavior Independent from Dopamine Release.&lt;/a&gt;&lt;/p&gt;

      &lt;p&gt;N. Simola, E. Tronci, A. Pinna &amp;amp; M. Morelli (2006). &lt;a href=&quot;https://pubmed.ncbi.nlm.nih.gov/16874467/&quot;&gt;Subchronic-intermittent caffeine amplifies the motor effects of amphetamine in rats.&lt;/a&gt; &lt;a href=&quot;#fnref:4&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:5&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Omar Cauli &amp;amp; Micaela Morelli (2002). &lt;a href=&quot;https://pubmed.ncbi.nlm.nih.gov/12122482/&quot;&gt;Subchronic caffeine administration sensitizes rats to the motor-activating effects of dopamine D(1) and D(2) receptor agonists.&lt;/a&gt; &lt;a href=&quot;#fnref:5&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>How Well Did Scott Alexander's List of Social Science Findings Hold Up?</title>
				<pubDate>Mon, 08 Apr 2024 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2024/04/08/did_social_science_findings_hold_up/</link>
				<guid isPermaLink="true">http://mdickens.me/2024/04/08/did_social_science_findings_hold_up/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;In 2012, Scott Alexander &lt;a href=&quot;https://web.archive.org/web/20131230022325/http://squid314.livejournal.com/322213.html&quot;&gt;defended social sciences&lt;/a&gt; against the claim that they can’t figure anything out. He gave a long list of well-established findings across a variety of social science disciplines.&lt;/p&gt;

&lt;p&gt;12 years later, how well did that list hold up?&lt;/p&gt;

&lt;!-- more --&gt;

&lt;p&gt;I evaluated the list off the top of my head without doing any research,&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt; so please don’t take this too seriously.&lt;/p&gt;

&lt;p&gt;The text before the colon on each numbered item is Scott’s words; everything else is my words.&lt;/p&gt;

&lt;h3 id=&quot;anthropology&quot;&gt;Anthropology&lt;/h3&gt;

&lt;ol&gt;
  &lt;li&gt;Humankind evolved in Africa, gradually settled the Old World, and crossed the Bering land bridge to America around 20,000 years ago: &lt;strong&gt;still good&lt;/strong&gt;&lt;/li&gt;
  &lt;li&gt;Languages form large families like Indo-European that can be used to trace the history and development of different peoples: &lt;strong&gt;still good&lt;/strong&gt;&lt;/li&gt;
  &lt;li&gt;People have an almost-miraculous language instinct that can, for example, turn a pidgin into a creole in the second generation: &lt;strong&gt;still good&lt;/strong&gt;&lt;/li&gt;
  &lt;li&gt;There are various human universals, but people tend to overestimate how universal their own culture’s norms are: &lt;strong&gt;I don’t know&lt;/strong&gt;
    &lt;ul&gt;
      &lt;li&gt;I don’t know how to interpret this claim. It’s well-established that there are various human universals and also that cultural norms vary a lot, but I have no idea whether it’s true that people tend to overestimate how universal their own culture’s norms are.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;Any biological mental differences between groups are less important than previously believed and quickly overwhelmed by within-group differences: &lt;strong&gt;still good&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Verdict: 4/5 or 5/5. Anthropology is doing well.&lt;/p&gt;

&lt;h3 id=&quot;economics&quot;&gt;Economics&lt;/h3&gt;

&lt;ol&gt;
  &lt;li&gt;Prices in the marketplace are determined by supply and demand: &lt;strong&gt;still good&lt;/strong&gt;&lt;/li&gt;
  &lt;li&gt;Capitalism leads to faster economic growth than the alternatives: &lt;strong&gt;still good&lt;/strong&gt;&lt;/li&gt;
  &lt;li&gt;Unless you have very strange priorities, free trade is a good idea: &lt;strong&gt;still good&lt;/strong&gt;&lt;/li&gt;
  &lt;li&gt;The gold standard is a bad idea: &lt;strong&gt;still good&lt;/strong&gt;&lt;/li&gt;
  &lt;li&gt;Rent control decreases the quality and quantity of available housing: &lt;strong&gt;still good&lt;/strong&gt;&lt;/li&gt;
  &lt;li&gt;Minimum wages increase unemployment: &lt;strong&gt;still good&lt;/strong&gt;
    &lt;ul&gt;
      &lt;li&gt;There are some observational studies contradicting this and there’s debate as to whether minimum wage is currently high enough that raising it would cause noticeable unemployment but the basic principle of this claim is still true. (I found Bryan Caplan’s &lt;a href=&quot;https://www.econlib.org/archives/2013/03/the_vice_of_sel.html&quot;&gt;The Myopic Empiricism of the Minimum Wage&lt;/a&gt; persuasive.&lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt;) And anyway, the counter-arguments people would raise today are the same ones they would’ve raised in 2012.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;Cutting taxes will not increase government revenue in normal conditions: &lt;strong&gt;still good&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Verdict: 7/7. Economics is doing well.&lt;/p&gt;

&lt;h3 id=&quot;psychology&quot;&gt;Psychology&lt;/h3&gt;

&lt;ol&gt;
  &lt;li&gt;Personality is about 50% biologically determined: &lt;strong&gt;still good&lt;/strong&gt;&lt;/li&gt;
  &lt;li&gt;People’s self-image is constructed on the spot and varies widely depending on the situation: &lt;strong&gt;still good&lt;/strong&gt;&lt;/li&gt;
  &lt;li&gt;People exhibit various cognitive biases that deviate systematically from rational thought: &lt;strong&gt;partially replicated&lt;/strong&gt;
    &lt;ul&gt;
      &lt;li&gt;Some cognitive biases have replicated (base rate neglect, status quo bias, loss aversion, hindsight bias), others haven’t (priming, nudge theory, implicit bias). From scanning through Wikipedia’s &lt;a href=&quot;https://en.wikipedia.org/wiki/List_of_cognitive_biases&quot;&gt;list of cognitive biases&lt;/a&gt;, it looks like 70–90% have replicated. I will give this one half credit for a partial replication.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;IQ correlates to all kinds of important life outcomes: &lt;strong&gt;still good&lt;/strong&gt;&lt;/li&gt;
  &lt;li&gt;Many mental disorders correspond to disordered brain chemistry and can be partly treated chemically: &lt;strong&gt;still good&lt;/strong&gt;&lt;/li&gt;
  &lt;li&gt;People lack privileged access to their own mental processes: &lt;strong&gt;still good&lt;/strong&gt;&lt;/li&gt;
  &lt;li&gt;Cognitive behavioral therapy does better than placebo in treating mental disorders; Freudian therapy does not. &lt;strong&gt;still good (?)&lt;/strong&gt;
    &lt;ul&gt;
      &lt;li&gt;IIRC some RCTs have found good benefits to the modern version of Freudian therapy but I don’t think it’s well-established.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;Babies are not a blank slate but have various built-in behavioral patterns; they develop new mental abilities in an orderly fashion: &lt;strong&gt;still good&lt;/strong&gt;&lt;/li&gt;
  &lt;li&gt;Animals react to reward and punishment in extremely predictable, almost mathematical ways: &lt;strong&gt;still good&lt;/strong&gt;&lt;/li&gt;
  &lt;li&gt;Strong relationships and driving purpose are very important to happiness; material goods less so after a certain point: &lt;strong&gt;unclear&lt;/strong&gt;
    &lt;ul&gt;
      &lt;li&gt;As I understand it, the happiness research does consistently support this claim, but it’s not clear that we are doing a good job of measuring happiness.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Verdict: 8.5/10 or 9.5/10. Psychology is doing fairly well. I found this one surprising considering how much stuff in psychology has failed to replicate, but Scott did a good job of identifying the claims that would hold up (a much better job than my college psychology textbook did).&lt;/p&gt;

&lt;h3 id=&quot;sociology&quot;&gt;Sociology&lt;/h3&gt;

&lt;ol&gt;
  &lt;li&gt;People are racist as heck: &lt;strong&gt;failed to replicate&lt;/strong&gt;
    &lt;ul&gt;
      &lt;li&gt;I believe Scott is talking about Implicit Association Tests, which do consistently show implicit associations but don’t reliably predict behavior. Some other things like resume name bias failed to replicate. There’s some version of this hypothesis (the “all bad racial outcomes are caused by racism” hypothesis) that’s basically unfalsifiable and therefore hasn’t been disproven, but that’s not a point in its favor.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;No, really, they’re really racist: &lt;strong&gt;failed to replicate&lt;/strong&gt;&lt;/li&gt;
  &lt;li&gt;Even the ones who don’t think they are: &lt;strong&gt;failed to replicate&lt;/strong&gt;&lt;/li&gt;
  &lt;li&gt;Even the ones who swear up and down that they’re not racist and donate to the NAACP: &lt;strong&gt;failed to replicate&lt;/strong&gt;&lt;/li&gt;
  &lt;li&gt;There are major disparities in the income levels and social status of various ethnic groups: &lt;strong&gt;still good&lt;/strong&gt;&lt;/li&gt;
  &lt;li&gt;Discrimination explains a lot of this, sometimes in surprising ways: &lt;strong&gt;was never established in the first place&lt;/strong&gt;
    &lt;ul&gt;
      &lt;li&gt;I don’t entirely understand what Scott meant by this but as far as I know, it was never established and the evidence almost entirely contradicts it.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;Most social problems are closely correlated with one another. Scandinavia has the fewest social problems of developed countries, and the US has the most: &lt;strong&gt;still good&lt;/strong&gt;&lt;/li&gt;
  &lt;li&gt;Poor people and uneducated people tend to suffer more social problems and commit more crimes: &lt;strong&gt;still good&lt;/strong&gt;&lt;/li&gt;
  &lt;li&gt;Religious people tend to be happier and better-adjusted than others: &lt;strong&gt;still good&lt;/strong&gt;&lt;/li&gt;
  &lt;li&gt;Social class is a big deal, even in societies that make a big pretense of being classless: &lt;strong&gt;still good&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Verdict: 5/10. Sociology are you ok? (To be fair, this list is a bit skewed because Scott spent four bullet points reiterating the same claim that failed to replicate.)&lt;/p&gt;

&lt;h3 id=&quot;epidemiology&quot;&gt;Epidemiology&lt;/h3&gt;

&lt;ol&gt;
  &lt;li&gt;Smoking causes cancer and many other problems: &lt;strong&gt;still good&lt;/strong&gt;&lt;/li&gt;
  &lt;li&gt;Alcohol causes liver disease and many other problems: &lt;strong&gt;still good&lt;/strong&gt;&lt;/li&gt;
  &lt;li&gt;Bad diets (in some sense of the word) cause heart disease, Type II diabetes, and many other problems: &lt;strong&gt;still good&lt;/strong&gt;&lt;/li&gt;
  &lt;li&gt;Two zillion other correlations between risk factors and diseases: &lt;strong&gt;still good&lt;/strong&gt;&lt;/li&gt;
  &lt;li&gt;Exercise is really good for you: &lt;strong&gt;still good&lt;/strong&gt;&lt;/li&gt;
  &lt;li&gt;Vaccines are extremely effective at controlling infectious diseases: &lt;strong&gt;still good&lt;/strong&gt;&lt;/li&gt;
  &lt;li&gt;And they don’t cause autism: &lt;strong&gt;still good&lt;/strong&gt;&lt;/li&gt;
  &lt;li&gt;Stress causes or exacerbates many diseases: &lt;strong&gt;still good&lt;/strong&gt;&lt;/li&gt;
  &lt;li&gt;Poor people suffer from more diseases, even in ways that are not directly linked to them not being able to afford medical care: &lt;strong&gt;still good&lt;/strong&gt;&lt;/li&gt;
  &lt;li&gt;Many diseases seem to be part genetic and part “other factors”, including mental diseases: &lt;strong&gt;still good&lt;/strong&gt;&lt;/li&gt;
  &lt;li&gt;Lifestyle changes can decrease your chance of getting mental diseases: &lt;strong&gt;still good(ish)&lt;/strong&gt;
    &lt;ul&gt;
      &lt;li&gt;As I understand, this is broadly supported but there’s some contradictory evidence on e.g. whether exercise helps with depression.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;Multivitamins don’t work: &lt;strong&gt;mixed evidence&lt;/strong&gt;
    &lt;ul&gt;
      &lt;li&gt;There are some RCTs on both sides (for a quick summary, see &lt;a href=&quot;https://www.redpenreviews.org/reviews/eat-drink-and-be-healthy/&quot;&gt;here&lt;/a&gt; under the heading “Most unusual claim”). Multivitamins are cheap so IMO they pass a &lt;a href=&quot;https://slatestarcodex.com/2020/04/14/a-failure-but-not-of-prediction/&quot;&gt;cost-benefit analysis&lt;/a&gt;. I am giving zero credit on this one because, while it might be true, it’s not &lt;em&gt;definitely&lt;/em&gt; true, and this was presented as a list of definitely-true findings.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;Low-dose aspirin (probably) helps prevent cancer: &lt;strong&gt;still good&lt;/strong&gt;
    &lt;ul&gt;
      &lt;li&gt;The evidence on this is weak but Scott did say “probably” so I’ll give this one full credit.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Verdict: 11/13 or 12/13. Epidemiology is doing pretty well.&lt;/p&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Except for a few of the claims in epidemiology which I didn’t know anything about, so I did about 15 seconds of research. &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:2&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Prior to reading Caplan’s article, my position on minimum wage was basically, “Demand curves are almost always downward sloping and the empirical evidence on minimum wage isn’t good enough to overcome this strong prior.” Which is the first argument Caplan makes, but he also makes some other good arguments that I hadn’t thought of. &lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>Explicit Bayesian Reasoning: Don't Give Up So Easily</title>
				<pubDate>Wed, 03 Apr 2024 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2024/04/03/explicit_bayesian_reasoning_dont_give_up/</link>
				<guid isPermaLink="true">http://mdickens.me/2024/04/03/explicit_bayesian_reasoning_dont_give_up/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;Recently, Saar Wilf, creator of &lt;a href=&quot;https://www.rootclaim.com/&quot;&gt;Rootclaim&lt;/a&gt;, had a high-profile debate against Peter Miller on whether COVID originated from a lab. Peter won and Saar lost.&lt;/p&gt;

&lt;p&gt;Rootclaim’s mission is to “overcome the flaws of human reasoning with our probabilistic inference methodology.” Rootclaim assigns odds to each piece of evidence and performs Bayesian updates to get a posterior probability. When Saar lost the lab leak debate, some people considered this a defeat not just for the lab leak hypothesis, but for Rootclaim’s whole approach.&lt;/p&gt;

&lt;p&gt;In Scott Alexander’s coverage of the debate, he &lt;a href=&quot;https://www.astralcodexten.com/p/practically-a-book-review-rootclaim&quot;&gt;wrote&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;While everyone else tries “pop Bayesianism” and “Bayes-inspired toolboxes”, Rootclaim asks: what if you just directly apply Bayes to the world’s hardest problems? There’s something pure about that, in a way nobody else is trying.&lt;/p&gt;

  &lt;p&gt;Unfortunately, the reason nobody else is trying this is because it doesn’t work. There’s too much evidence, and it’s too hard to figure out how to quantify it.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Don’t give up so easily! We as a society have spent approximately 0% of our collective decision-making resources on explicit Bayesian reasoning. Just because Rootclaim used Bayesian methods and then lost a debate doesn’t mean those methods will never work. That would be like saying, “randomized controlled trials were a great idea, but they &lt;a href=&quot;https://slatestarcodex.com/2014/04/28/the-control-group-is-out-of-control/&quot;&gt;keep finding that ESP exists&lt;/a&gt;. Oh well, I guess we should give up on RCTs and just form beliefs using common sense.”&lt;/p&gt;

&lt;p&gt;(And it’s not even like the problems with RCTs were easy to fix. &lt;a href=&quot;https://slatestarcodex.com/2014/04/28/the-control-group-is-out-of-control/&quot;&gt;Scott wrote&lt;/a&gt; about 10 known problems with RCTs and 10 ways to fix them, and then wrote about an RCT that fixed all 10&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt; of those problems and &lt;em&gt;still&lt;/em&gt; found that ESP exists. If we’re going to give RCTs more than 10 tries, we should extend the same courtesy to Bayesian reasoning.)&lt;/p&gt;

&lt;p&gt;I’m optimistic that we can make explicit Bayesian analysis work better. And I can already think of ways to improve on two problems with it.&lt;/p&gt;

&lt;!-- more --&gt;

&lt;p&gt;&lt;strong&gt;First problem:&lt;/strong&gt; If you multiply a long list of probabilities as if they’re independent when they’re not, you get an extreme result.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Quick fix:&lt;/strong&gt; Reduce the magnitudes of the odds updates based on how much evidence you already have. The more individual factors you have, the more a new factor can be explained in terms of existing factors.&lt;/p&gt;

&lt;p&gt;For example, you could scale down the log-odds of your second observation by 1/2, your third observation by 1/3, your fourth observation by 1/4, etc. This roughly captures the intuitions that&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;if you have a lot of evidence already, a new observation is probably mostly predicted by the existing evidence&lt;/li&gt;
  &lt;li&gt;if you have infinitely many pieces of evidence, that should give you an infinitely large odds update&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This approach means you don’t need to spend any time thinking about how correlated your inputs are.&lt;/p&gt;

&lt;p&gt;If you have lines of evidence A, B, C, etc., the formula for joint log-odds becomes&lt;/p&gt;

&lt;p&gt;\begin{align}
\log(A) + \frac{1}{2} \log(B) + \frac{1}{3} \log(C) + …
\end{align}&lt;/p&gt;

&lt;p&gt;And therefore your joint odds would be&lt;/p&gt;

&lt;p&gt;\begin{align}
A \cdot B^{1/2} \cdot C^{1/3} \cdot …
\end{align}&lt;/p&gt;

&lt;p&gt;I don’t have a rigorous justification for this formula&lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt; and it has some obvious problems (for example, if you change the order of your inputs, the answer changes). But it has some advantages over treating every piece of evidence as independent.&lt;/p&gt;
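
&lt;p&gt;For concreteness, here is a minimal Python sketch of the weighting above (illustrative only; the function name is mine):&lt;/p&gt;

```python
import math

def joint_odds(odds):
    """Combine odds ratios A, B, C, ... as A * B**(1/2) * C**(1/3) * ...,
    i.e. scale the n-th observation's log-odds by 1/n."""
    log_total = sum(math.log(o) / n for n, o in enumerate(odds, start=1))
    return math.exp(log_total)
```

&lt;p&gt;For example, two 100:1 observations treated as independent would combine to 10,000:1, but under this weighting they combine to 100 · 100&lt;sup&gt;1/2&lt;/sup&gt; = 1,000:1.&lt;/p&gt;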

&lt;p&gt;As a proof of concept, I &lt;a href=&quot;https://docs.google.com/spreadsheets/d/1ASFyXUF6u6move_QMFW6yd-3O1mA9TiS/&quot;&gt;created a modified version&lt;/a&gt; of Scott Alexander’s &lt;a href=&quot;https://docs.google.com/spreadsheets/d/1Tm9ajpJudn-gjsZ1QsMoeYzY94JWEzzS/&quot;&gt;lab leak debate calculator&lt;/a&gt; that updates less on correlated evidence. My version assumes two lines of evidence are correlated if (1) they’re under the same heading and (2) they point in the same direction. This change reduces the standard deviation of people’s answers from 7.4 orders of magnitude to 4.4. (Or, if you exclude Peter Miller’s extremely-overconfident answer, it reduces the standard deviation from 2.1 OOM to 1.8.)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Second problem:&lt;/strong&gt; Overconfident probabilities like “1 in 10,000 chance that COVID would first appear in a wet market conditional on lab leak”.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Quick fix:&lt;/strong&gt; Give every piece of evidence a “reliability score”. Maybe the evidence looks like it suggests 10,000:1 odds but you haven’t thought about it that hard. You read the number in some population survey but maybe the survey mis-calculated, maybe it used bad data collection methods, maybe you misread the number of zeros and it actually said 1 in 1000.&lt;/p&gt;

&lt;p&gt;As a simple approach, you could give every piece of evidence a reliability score from 1 (low reliability) to 4 (high reliability). Discount evidence by raising it to the power of &lt;code&gt;1 / (5 - reliability_score)&lt;/code&gt;. So 10,000:1 evidence with a reliability score of 3 gets reduced to 10,000&lt;sup&gt;1/2&lt;/sup&gt; = 100:1, and evidence with a score of 1 gets reduced to 10,000&lt;sup&gt;1/4&lt;/sup&gt; = 10:1.&lt;/p&gt;
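
&lt;p&gt;A sketch of this discount in Python (illustrative; the function name is mine):&lt;/p&gt;

```python
def discount(odds, reliability_score):
    """Discount an odds ratio by a reliability score from 1 (low) to 4 (high),
    raising it to the power 1 / (5 - reliability_score)."""
    return odds ** (1.0 / (5 - reliability_score))
```

&lt;p&gt;A score of 4 leaves the odds untouched, while a score of 1 shrinks 10,000:1 all the way down to 10:1.&lt;/p&gt;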

&lt;p&gt;Is that the best way to handle the problem of overconfident odds updates? Probably not. But it’s really easy and it took me three seconds to come up with.&lt;/p&gt;

&lt;p&gt;(If you think carefully enough about your odds, you don’t need a reliability score. But the score is a convenient way to encode a concept like “I did some calculations and got 10,000:1 odds but I haven’t carefully checked the calculations.”)&lt;/p&gt;

&lt;p&gt;Quoting Scott again,&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;In the end, I think Saar has two options:&lt;/p&gt;

  &lt;ol&gt;
    &lt;li&gt;
      &lt;p&gt;Abandon the Rootclaim methodology, and go back to normal boring impure reasoning like the rest of us, where you vaguely gesture at Bayesian math but certainly don’t try anything as extreme as actually using it.&lt;/p&gt;
    &lt;/li&gt;
    &lt;li&gt;
      &lt;p&gt;Claim that he, Saar, through his years of experience testing Rootclaim, has some kind of special metis at using it, and everyone else is screwing up.&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/blockquote&gt;

&lt;p&gt;(I get the sense Scott is joking, but I’ve heard other people say things like this.)&lt;/p&gt;

&lt;p&gt;I propose a third option: Examine the flaws in explicit Bayesian reasoning and look for ways to fix them.&lt;/p&gt;

&lt;p&gt;Or a fourth option: Do explicit Bayesian reasoning, don’t take the result literally but implicitly update your beliefs based on the result.&lt;/p&gt;

&lt;p&gt;Or a fifth option: Figure out how to fix RCTs, and then do something similar for Bayesian reasoning. (Did we figure out how to fix RCTs yet?)&lt;/p&gt;

&lt;p&gt;Or a sixth option: Keep doing Bayesian reasoning, and meanwhile keep trying to fix its flaws (like we are sort-of doing for RCTs).&lt;/p&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Actually only 8 out of 10 but the basic point still stands. &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:2&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;You could slightly-more-rigorously justify this formula by saying&lt;/p&gt;

      &lt;ul&gt;
        &lt;li&gt;The variance in evidence B is 50% explained by evidence A.&lt;/li&gt;
        &lt;li&gt;The variance in evidence C is 50% explained by A and 50% explained by B.&lt;/li&gt;
        &lt;li&gt;But the parts of C that are explained by A and B heavily overlap, so less than 75% of C is explained by A plus B.&lt;/li&gt;
      &lt;/ul&gt;
      &lt;p&gt;&lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>Does Caffeine Stop Working?</title>
				<pubDate>Fri, 29 Mar 2024 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2024/03/29/does_caffeine_stop_working/</link>
				<guid isPermaLink="true">http://mdickens.me/2024/03/29/does_caffeine_stop_working/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;&lt;em&gt;Last updated 2024-09-02.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;a href=&quot;https://mdickens.me/confidence_tags/&quot;&gt;Confidence&lt;/a&gt;: Likely.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;If you take caffeine every day, does it stop working? If it keeps working, how much of its effect does it retain?&lt;/p&gt;

&lt;p&gt;There are many studies on this question, but most of them have severe methodological limitations. I read all the good studies (on humans) I could find. Here’s my interpretation of the literature:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Caffeine almost certainly loses some but not all of its effect when you take it every day.&lt;/li&gt;
  &lt;li&gt;In expectation, caffeine retains 1/2 of its benefit, but this figure has a wide credence interval.&lt;/li&gt;
  &lt;li&gt;The studies on cognitive benefits all have some methodological issues so they might not generalize.&lt;/li&gt;
  &lt;li&gt;There are two studies on exercise benefits with strong methodology, but they have small sample sizes.&lt;/li&gt;
&lt;/ul&gt;

&lt;!-- more --&gt;

&lt;h2 id=&quot;clarifying-terminology&quot;&gt;Clarifying terminology&lt;/h2&gt;

&lt;p&gt;The scientific literature talks about the “caffeine withdrawal hypothesis.” People use this term to describe two very different hypotheses:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Caffeine has no benefits for &lt;em&gt;anyone&lt;/em&gt;. It reverses withdrawal symptoms for habituated users, but it doesn’t do anything for non-users. (Call this the “caffeine-is-useless hypothesis.”)&lt;/li&gt;
  &lt;li&gt;Caffeine initially has benefits for non-users, but if you use caffeine habitually, your body adjusts to the point where you need caffeine just to get back up to baseline. (Call this the “caffeine habituation hypothesis.”)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;According to the caffeine-is-useless hypothesis, on the first day you take caffeine, you experience no benefits, and then you start feeling withdrawal symptoms after you’ve been taking caffeine for a few days:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/caf-useless-hypothesis.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;According to the caffeine habituation hypothesis, caffeine has initial benefits, but you start developing a tolerance:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/caf-habituation-hypothesis.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;(Graphs inspired by Gavin Leech’s &lt;a href=&quot;https://www.gleech.org/stims&quot;&gt;article&lt;/a&gt; on caffeine, except I put way less effort into mine.)&lt;/p&gt;

&lt;p&gt;Most studies on caffeine withdrawal only look at the caffeine-is-useless hypothesis. The study results pretty much universally reject this hypothesis so I consider it falsified&lt;sup id=&quot;fnref:16&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:16&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;. I’m much more interested in the caffeine habituation hypothesis, so that’s the one I will be discussing.&lt;/p&gt;

&lt;p&gt;Research papers almost never explicitly discuss the caffeine habituation hypothesis, but some of them provide enough data to test it. In the next two sections, I will review some studies and figure out what their data tell us about the caffeine habituation hypothesis.&lt;/p&gt;

&lt;h2 id=&quot;cognition-studies&quot;&gt;Cognition studies&lt;/h2&gt;

&lt;p&gt;I found two good studies on the exercise benefits of caffeine, and four good(ish) studies on the cognitive benefits. Let’s start with the cognition studies: &lt;a href=&quot;https://pubmed.ncbi.nlm.nih.gov/23108937/&quot;&gt;Rogers et al. (2013)&lt;/a&gt;&lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt;, &lt;a href=&quot;https://pubmed.ncbi.nlm.nih.gov/17514640/&quot;&gt;Hewlett &amp;amp; Smith (2007)&lt;/a&gt;&lt;sup id=&quot;fnref:6&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:6&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt;, &lt;a href=&quot;https://pubmed.ncbi.nlm.nih.gov/15678363/&quot;&gt;Haskell et al. (2005)&lt;/a&gt;&lt;sup id=&quot;fnref:9&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:9&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt;, and &lt;a href=&quot;https://pubmed.ncbi.nlm.nih.gov/16910172/&quot;&gt;Smith et al. (2006)&lt;/a&gt;&lt;sup id=&quot;fnref:10&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:10&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;5&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;These studies all used similar methodology: they divided participants into high caffeine users vs. low/non-users. Then they randomly administered either caffeine or placebo and tested participants’ performance on various cognitive tests.&lt;/p&gt;

&lt;p&gt;The studies each have four groups of participants:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;LoCaf: low- or non-caffeine users assigned to take caffeine&lt;/li&gt;
  &lt;li&gt;LoPla: low- or non-caffeine users assigned to take placebo&lt;/li&gt;
  &lt;li&gt;HiCaf: high-caffeine users assigned to take caffeine&lt;/li&gt;
  &lt;li&gt;HiPla: high-caffeine users assigned to take placebo&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To test the caffeine habituation hypothesis, compare the performance of high-caffeine users after taking caffeine (HiCaf) versus low-caffeine users after taking placebo (LoPla). If high users develop complete tolerance then these two groups should perform the same: when a habitual user takes caffeine, it brings their performance back up to baseline, but has no benefits beyond that.&lt;/p&gt;

&lt;p&gt;(This methodology isn’t perfect because low and high caffeine users might differ in ways that could bias the results.&lt;sup id=&quot;fnref:13&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:13&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;6&lt;/a&gt;&lt;/sup&gt; For more discussion of this possibility, see &lt;a href=&quot;https://pubmed.ncbi.nlm.nih.gov/23108937/&quot;&gt;Rogers et al. (2013)&lt;/a&gt;&lt;sup id=&quot;fnref:2:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt;. It would be better to randomize participants to take either caffeine or placebo for several weeks.)&lt;/p&gt;

&lt;p&gt;Then calculate:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;baseline benefit&lt;/strong&gt; = LoCaf – LoPla = benefit to a naive user&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;habituated benefit&lt;/strong&gt; = HiCaf – LoPla = benefit to a habituated user&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;retention&lt;/strong&gt; = habituated benefit / baseline benefit = (HiCaf – LoPla) / (LoCaf – LoPla) = what proportion of caffeine’s benefits are retained by a habituated user&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Retention ranges from 0 (caffeine loses all its effect) to 1 (caffeine retains all its effect).&lt;/p&gt;
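&lt;p&gt;As a minimal sketch, here is the retention calculation with made-up group means (these numbers are invented for illustration, not taken from any of the studies):&lt;/p&gt;

```python
# Hypothetical group means on some performance metric (higher = better).
# These values are invented for illustration only.
locaf = 10.0  # low/non-users given caffeine
lopla = 8.0   # low/non-users given placebo
hicaf = 9.0   # high users given caffeine

baseline_benefit = locaf - lopla    # benefit to a naive user
habituated_benefit = hicaf - lopla  # benefit to a habituated user
retention = habituated_benefit / baseline_benefit  # here: 0.5
```

&lt;p&gt;With these invented numbers, a habitual user retains half the benefit. Retention of 0 would mean HiCaf performs no better than LoPla.&lt;/p&gt;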

&lt;p&gt;I evaluated the effectiveness of caffeine by computing approximate &lt;a href=&quot;https://en.wikipedia.org/wiki/Likelihood_function&quot;&gt;likelihood functions&lt;/a&gt; of retention from each study’s measurements.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;(Disclaimer: I only just now learned how likelihood functions work so that I could write this article. Be wary of mistakes.)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The likelihood function L(X) answers the question, “If caffeine’s true retention is X, what is the probability that we would measure the retention that we did in fact measure?” If a particular retention has a high likelihood of generating the results we got, that makes it more likely that that’s the true retention.&lt;/p&gt;

&lt;p&gt;(For a longer explanation of likelihood functions, see &lt;a href=&quot;https://arbital.greaterwrong.com/p/likelihood_not_pvalue_faq&quot;&gt;Report Likelihoods, Not P-Values&lt;/a&gt;.)&lt;/p&gt;
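&lt;p&gt;For a concrete toy example: if we model a measured retention as normally distributed around the true retention X, then L(X) is just the normal density evaluated at the measurement. The measurement and standard error below are invented:&lt;/p&gt;

```python
import math

measured = 0.56  # illustrative measured retention
se = 0.2         # illustrative standard error

def likelihood(x):
    # Probability density of observing 0.56 if the true retention is x
    z = (measured - x) / se
    return math.exp(-0.5 * z ** 2) / (se * math.sqrt(2 * math.pi))

# True retentions near the measurement are more likely to have
# produced it: likelihood(0.5) is much larger than likelihood(0.0)
```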

&lt;p&gt;Plotting likelihood functions of the four main metrics&lt;sup id=&quot;fnref:18&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:18&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;7&lt;/a&gt;&lt;/sup&gt; from Rogers et al. (2013) (which had a larger sample size than the other three studies combined) along with the average likelihood&lt;sup id=&quot;fnref:17&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:17&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;8&lt;/a&gt;&lt;/sup&gt; of those metrics:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/caf-likelihood-Rogers.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;(The dots on each curve show the mean likelihoods. You can interpret the mean as the value the evidence tends to point toward.&lt;sup id=&quot;fnref:15&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:15&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;9&lt;/a&gt;&lt;/sup&gt;)&lt;/p&gt;

&lt;p&gt;This graph says the experiment points toward caffeine retaining around half its effect (0.56 to be precise). And it says we’d be somewhat unlikely to see these experimental results if retention = 0, but not unlikely to see them if retention = 1.&lt;/p&gt;

&lt;p&gt;Here’s how I computed these approximate likelihood functions:&lt;/p&gt;

&lt;p&gt;Model the baseline benefit (LoCaf - LoPla) and habituated benefit (HiCaf - LoPla) as t-distributions.&lt;sup id=&quot;fnref:14&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:14&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;10&lt;/a&gt;&lt;/sup&gt; To compute the likelihood function for retention (habituated benefit / baseline benefit), we need to know the shape of a ratio of t-distributions.&lt;/p&gt;

&lt;p&gt;A ratio of t-distributions does not have a closed form solution. So I approximated the solution using a formula I pulled off &lt;a href=&quot;https://en.wikipedia.org/wiki/Ratio_distribution#Uncorrelated_noncentral_normal_ratio&quot;&gt;Wikipedia&lt;/a&gt; for the ratio of two independent normal distributions (astute readers will notice that the distributions in question are neither independent nor normal). I guess you could call this the Simon-Ftorek approximation because Wikipedia cites Simon and Ftorek (2022)&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;11&lt;/a&gt;&lt;/sup&gt;. For more on whether this approximation is any good, see &lt;a href=&quot;#appendix-a-approximating-the-retention-ratio&quot;&gt;Appendix A&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I used the Simon-Ftorek approximation to compute the likelihood functions for the most important metrics in each study (see &lt;a href=&quot;#appendix-b-list-of-all-metrics-used&quot;&gt;Appendix B&lt;/a&gt; for a list of all the metrics I used). Then I computed a likelihood function for each study as the (geometric) average of all the likelihood functions for individual metrics in that study. Normally, you’re supposed to compute a joint likelihood as the product of your likelihood functions. But that assumes each function provides independent evidence, and I figured if you have the same study with the same participants, the different metrics all basically represent the same evidence. So I averaged them instead of multiplying them.&lt;/p&gt;

&lt;p&gt;If we take the joint likelihoods from the four cognition studies and combine them into one big joint likelihood, we get this:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/caf-likelihood-joint-cognition.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;h2 id=&quot;exercise-studies&quot;&gt;Exercise studies&lt;/h2&gt;

&lt;p&gt;I found two good studies on how habitual caffeine affects exercise: &lt;a href=&quot;https://doi.org/10.1080/02640414.2016.1241421&quot;&gt;Beaumont et al. (2017)&lt;/a&gt;&lt;sup id=&quot;fnref:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;12&lt;/a&gt;&lt;/sup&gt; and &lt;a href=&quot;https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6343867/&quot;&gt;Lara et al. (2019)&lt;/a&gt;&lt;sup id=&quot;fnref:11&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:11&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;13&lt;/a&gt;&lt;/sup&gt;. These two studies had participants abstain from caffeine for a month. Then they gave participants either caffeine or placebo for several weeks, testing their exercise performance at the beginning and end of the study.&lt;/p&gt;

&lt;p&gt;The methodologies differed somewhat, and I calculated retention accordingly.&lt;/p&gt;

&lt;p&gt;In Beaumont et al. (2017)&lt;sup id=&quot;fnref:3:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;12&lt;/a&gt;&lt;/sup&gt;, the participants took two pre-tests, one with caffeine and one with placebo. Then after taking caffeine every day for 28 days, they took a dose of caffeine and a final performance test. I calculated&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;baseline benefit = caffeine pre-test – placebo pre-test&lt;/li&gt;
  &lt;li&gt;habituated benefit = caffeine post-test – placebo pre-test&lt;/li&gt;
  &lt;li&gt;retention = habituated benefit / baseline benefit&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Lara et al. (2019)&lt;sup id=&quot;fnref:11:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:11&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;13&lt;/a&gt;&lt;/sup&gt; tested participants 3 times a week for 20 days. To take advantage of all the extra data points, I plotted a linear regression over the performance for the caffeine group minus the placebo group&lt;sup id=&quot;fnref:12&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:12&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;14&lt;/a&gt;&lt;/sup&gt;. Then I calculated&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;baseline benefit = projected effect size on day 0 (= intercept of the regression)&lt;/li&gt;
  &lt;li&gt;habituated benefit = projected effect size on day 20&lt;/li&gt;
  &lt;li&gt;retention = habituated benefit / baseline benefit&lt;/li&gt;
&lt;/ul&gt;
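&lt;p&gt;A sketch of this regression calculation, with invented effect sizes in place of the study’s data:&lt;/p&gt;

```python
import numpy as np

# Invented caffeine-minus-placebo effect sizes across test days
days = np.array([0, 2, 4, 7, 9, 11, 14, 16, 18, 20], dtype=float)
effects = np.array([5.0, 4.8, 4.1, 3.9, 3.5, 3.2, 2.9, 2.6, 2.4, 2.2])

slope, intercept = np.polyfit(days, effects, 1)

baseline_benefit = intercept                 # projected effect on day 0
habituated_benefit = slope * 20 + intercept  # projected effect on day 20
retention = habituated_benefit / baseline_benefit
```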

&lt;p&gt;&lt;img src=&quot;/assets/images/caf-likelihood-joint-exercise.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;It looks like caffeine retains a little under 1/2 its benefit for exercise.&lt;/p&gt;

&lt;h2 id=&quot;some-problems-with-my-approach&quot;&gt;Some problems with my approach&lt;/h2&gt;

&lt;ul&gt;
  &lt;li&gt;As I mentioned before, I couldn’t calculate exact likelihood functions; I could only approximate them.&lt;/li&gt;
  &lt;li&gt;I assumed caffeine has the same effect on all cognitive tests and on all exercise tests. But caffeine probably helps more with some tasks than others. It probably improves reaction time more than memory; it probably helps more with sustained moderate exercise (e.g., a &lt;a href=&quot;https://en.wikipedia.org/wiki/VO2_max#Measurement_and_calculation&quot;&gt;VO2 max test&lt;/a&gt;) than with short intense efforts (e.g., a &lt;a href=&quot;https://en.wikipedia.org/wiki/Wingate_test&quot;&gt;Wingate test&lt;/a&gt;).&lt;/li&gt;
  &lt;li&gt;Limiting the domain to [0, 1] makes the likelihood means tend toward 0.5 because it chops off the (often large) parts of the distribution below 0 and above 1. If you think retention can go above 1 but not below 0, you’ll get a higher mean. Conversely, if you think it can go below 0 but not above 1, you’ll get a lower mean. (Symmetrically expanding the domain to [-1, 2] doesn’t change the answers much.)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;I plotted likelihood functions for every study I reviewed and combined them into one big joint likelihood:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/caf-likelihood-joint-all-metrics.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;What about a posterior probability?&lt;/p&gt;

&lt;p&gt;If you use a uniform prior for caffeine retention, the posterior probability distribution simply equals the likelihood function. According to the studies I looked at, a habituated caffeine user retains an expected 49% of the cognitive benefit and 44% of the exercise benefit, or 48% if we combine the cognition and exercise studies.&lt;/p&gt;

&lt;p&gt;My prior has more probability mass near 0 than 1. Human bodies &lt;a href=&quot;https://slatestarcodex.com/2019/08/19/maybe-your-zoloft-stopped-working-because-a-liver-fluke-tried-to-turn-your-nth-great-grandmother-into-a-zombie/&quot;&gt;want to maintain homeostasis&lt;/a&gt;, so there’s some theoretical reason to expect your body to adjust until caffeine stops working entirely. (And, empirically, taking caffeine does cause your brain to grow more neurotransmitter receptors,&lt;sup id=&quot;fnref:19&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:19&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;15&lt;/a&gt;&lt;/sup&gt; although it’s not clear how this corresponds to cognitive function.) Changing the prior moves my posterior expected retention from 48% to 43% or so.&lt;sup id=&quot;fnref:20&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:20&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;16&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;
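&lt;p&gt;This prior adjustment can be sketched with a discretized Bayes update (the likelihood curve and the prior shape below are invented stand-ins, not my actual numbers):&lt;/p&gt;

```python
import numpy as np

xs = np.linspace(0.0, 1.0, 1001)  # grid of retention values

# Invented stand-in for the combined likelihood curve, peaked near 0.48
likelihood = np.exp(-0.5 * ((xs - 0.48) / 0.25) ** 2)

def posterior_mean(prior):
    post = prior * likelihood
    post = post / post.sum()   # normalize the discretized posterior
    return (xs * post).sum()   # expected retention

uniform = np.ones_like(xs)  # posterior mean = mean of the likelihood
skewed = 1.5 - xs           # more probability mass near 0 than near 1

# The skewed prior pulls the expected retention downward
```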

&lt;p&gt;As shown in &lt;a href=&quot;#appendix-a-approximating-the-retention-ratio&quot;&gt;Appendix A&lt;/a&gt;, my approximation for the likelihood function understates the mean. Plus my review excluded any metrics where caffeine had too small a baseline benefit, which biases the retention to look smaller. Due to these factors, the true implied retention is higher than 43%. Let’s call it 50% to make it a nice round number.&lt;/p&gt;

&lt;p&gt;So we can reasonably say, albeit with a high degree of uncertainty, that caffeine retains something like half its benefit for a habitual user.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Source code for my calculations can be found &lt;a href=&quot;https://github.com/michaeldickens/public-scripts/tree/master/caffeine&quot;&gt;on GitHub&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;h2 id=&quot;appendix-a-approximating-the-retention-ratio&quot;&gt;Appendix A: Approximating the retention ratio&lt;/h2&gt;

&lt;p&gt;As a sanity check, I generated 10,000 random samples following the same distribution as the groups in one of the caffeine studies (Rogers et al. (2013)&lt;sup id=&quot;fnref:2:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt;), where each sample represents a set of true parameter values that could correspond to the observed values for the four groups (LoCaf, LoPla, HiCaf, HiPla). Then I calculated the distribution of true retention and the Simon-Ftorek approximation of the likelihood function (normalized to integrate to 1) and plotted them together:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/caf-MC-vs-SF-wide.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;A zoomed-in plot showing just the values from 0 to 1:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/caf-MC-vs-SF-narrow.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;(The upper plot already chops off a big tail. The largest retention value in the sample was over 500, i.e., a long-term caffeine user appears to get a 500x larger benefit per dose than a naive user. That can happen with a ratio of random variables when the denominator ends up close to 0.)&lt;/p&gt;
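&lt;p&gt;These heavy tails are easy to reproduce with a quick Monte Carlo simulation (the means and standard errors below are invented, not the values from the study):&lt;/p&gt;

```python
import random

random.seed(0)

# Invented true-effect distributions for the two benefits
baseline_mu, baseline_se = 2.0, 1.0
habituated_mu, habituated_se = 1.0, 1.0

ratios = []
for _ in range(10_000):
    baseline = random.gauss(baseline_mu, baseline_se)
    habituated = random.gauss(habituated_mu, habituated_se)
    ratios.append(habituated / baseline)

# When the sampled denominator lands near zero, the ratio explodes,
# producing extreme values far outside [0, 1]
```

&lt;p&gt;Restricting the sample to draws where the baseline benefit is large (as in my metric-selection rule) trims some of these extremes.&lt;/p&gt;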

&lt;p&gt;From these plots we can see that a Simon-Ftorek approximation slightly overstates the width of the distribution.&lt;/p&gt;

&lt;p&gt;If we restrict the Monte Carlo sample to just the results with a baseline benefit of at least one standard error (as I did when selecting metrics to look at), we get this:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/caf-MC-vs-SF-1SE.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;I also tried two other approximations, but Simon-Ftorek seemed best. The other approximations I tried:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Approximate using a Cauchy distribution. A Cauchy distribution perfectly represents the ratio of two central normal distributions, so I figured it might be an okay approximation for the ratio of non-central normal distributions.&lt;/li&gt;
  &lt;li&gt;Numerically compute the ratio of two distributions represented as histograms. This still requires assuming the distributions are independent.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This plot shows the three different approximations over a single metric from Beaumont et al. (2017)&lt;sup id=&quot;fnref:3:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;12&lt;/a&gt;&lt;/sup&gt;:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/caf-likelihood-approximations.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;h2 id=&quot;appendix-b-list-of-all-metrics-used&quot;&gt;Appendix B: List of all metrics used&lt;/h2&gt;

&lt;p&gt;I did not use metrics from every study. I used the following criteria to select metrics:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Only use performance metrics, not subjective ratings or physiological measurements. (That means no sleepiness, no heart rate, etc.)&lt;/li&gt;
  &lt;li&gt;Prefer metrics that the study authors emphasized.&lt;/li&gt;
  &lt;li&gt;Don’t use any metrics where the baseline benefit was small (&amp;lt;1 standard error) or negative.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I used the following metrics from each study:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Rogers et al. (2013)&lt;sup id=&quot;fnref:2:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt;: simple reaction time, choice reaction time, recognition memory, tapping speed&lt;/li&gt;
  &lt;li&gt;Hewlett &amp;amp; Smith (2007)&lt;sup id=&quot;fnref:6:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:6&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt;: focused attention speed, simple reaction time, verbal reasoning % correct&lt;/li&gt;
  &lt;li&gt;Haskell et al. (2005)&lt;sup id=&quot;fnref:9:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:9&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt;: simple reaction time, digit vigilance reaction time, Rapid Visual Information Processing false alarms, spatial memory (sensitivity index), numeric memory reaction time&lt;/li&gt;
  &lt;li&gt;Smith et al. (2006)&lt;sup id=&quot;fnref:10:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:10&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;5&lt;/a&gt;&lt;/sup&gt;: focused attention speed, categoric search reaction time, simple reaction time, vigilance hits&lt;/li&gt;
  &lt;li&gt;Beaumont et al. (2017)&lt;sup id=&quot;fnref:3:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;12&lt;/a&gt;&lt;/sup&gt;: total energy output (kJ), substrate oxidation&lt;/li&gt;
  &lt;li&gt;Lara et al. (2019)&lt;sup id=&quot;fnref:11:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:11&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;13&lt;/a&gt;&lt;/sup&gt;: peak power, VO2 max, Wingate test peak power, Wingate test mean power&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Smith et al. and Lara et al. did not provide numeric standard errors (and Lara et al. did not provide means) but did provide plots, so I estimated the numeric values by counting the number of pixels using Gimp.&lt;/p&gt;

&lt;p&gt;Note: Ignoring metrics with a small or negative baseline benefit could bias the results toward making caffeine habituation look worse. This process selects for observations where the baseline benefit was large by chance, making the habituation benefit look smaller by comparison.&lt;/p&gt;

&lt;h2 id=&quot;changelog&quot;&gt;Changelog&lt;/h2&gt;

&lt;ul&gt;
  &lt;li&gt;2024-09-02: I previously described 44% (the mean retention of caffeine’s exercise benefit) as “about 1/3”. This makes it sound like the difference between the observed cognitive vs. exercise benefits is about 1/6 (17 percentage points) when in fact it’s only 4 percentage points. I changed the description to “a little under 1/2”.&lt;/li&gt;
  &lt;li&gt;2025-06-06: Fixed typos.&lt;/li&gt;
&lt;/ul&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:16&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;And I didn’t find it plausible to begin with. If caffeine doesn’t make non-users more alert, why would people start taking caffeine in the first place? &lt;a href=&quot;#fnref:16&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:2&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Rogers, P. J., Heatherley, S. V., Mullings, E. L., &amp;amp; Smith, J. E. (2013). &lt;a href=&quot;https://pubmed.ncbi.nlm.nih.gov/23108937/&quot;&gt;Faster but not smarter: effects of caffeine and caffeine withdrawal on alertness and performance.&lt;/a&gt; &lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:2:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt; &lt;a href=&quot;#fnref:2:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;3&lt;/sup&gt;&lt;/a&gt; &lt;a href=&quot;#fnref:2:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;4&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:6&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Hewlett, P., &amp;amp; Smith, A. (2007). &lt;a href=&quot;https://pubmed.ncbi.nlm.nih.gov/17514640/&quot;&gt;Effects of repeated doses of caffeine on performance and alertness: new data and secondary analyses.&lt;/a&gt; &lt;a href=&quot;#fnref:6&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:6:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:9&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Haskell, C. F., Kennedy, D. O., Wesnes, K. A., &amp;amp; Scholey, A. B. (2005). &lt;a href=&quot;https://pubmed.ncbi.nlm.nih.gov/15678363/&quot;&gt;Cognitive and mood improvements of caffeine in habitual consumers and habitual non-consumers of caffeine.&lt;/a&gt; &lt;a href=&quot;#fnref:9&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:9:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:10&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Smith, A., Sutherland, D., &amp;amp; Christopher, G. (2006). &lt;a href=&quot;https://pubmed.ncbi.nlm.nih.gov/16910172/&quot;&gt;Effects of caffeine in overnight-withdrawn consumers and non-consumers.&lt;/a&gt; &lt;a href=&quot;#fnref:10&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:10:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:13&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Some ways self-selection could bias the results:&lt;/p&gt;

      &lt;ul&gt;
        &lt;li&gt;People who are more naturally alert might not take caffeine because they don’t feel like they need it. This would make the habituated benefit look smaller. (That is, the LoPla group might perform better than the caffeine group because they’re naturally more alert, not because caffeine isn’t benefiting the caffeine group.)&lt;/li&gt;
        &lt;li&gt;People who don’t get much benefit from caffeine might not take it, making the baseline benefit look smaller (and thus retention look larger).&lt;/li&gt;
        &lt;li&gt;People who react strongly to caffeine might not take it (because it makes them jittery/anxious), making the baseline benefit look larger.&lt;/li&gt;
      &lt;/ul&gt;
      &lt;p&gt;&lt;a href=&quot;#fnref:13&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:18&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;The four metrics are: (1) simple reaction time (SRT); (2) choice reaction time (CRT); (3) recognition memory; (4) tapping speed. &lt;a href=&quot;#fnref:18&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:17&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Normally, you’d calculate joint likelihood as the product of the likelihood functions. But that only works if the functions are independent. Since these four functions are all measuring the same group of people during the same experiment, I averaged them instead of multiplying them. The resulting joint likelihood understates the strength of the evidence, but I’m more concerned about overstating evidence than understating it. &lt;a href=&quot;#fnref:17&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:15&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Alternatively, the mean likelihood equals the posterior expected value when using a uniform prior.&lt;/p&gt;

      &lt;p&gt;Statistical analyses commonly report the &lt;a href=&quot;https://en.wikipedia.org/wiki/Maximum_likelihood&quot;&gt;maximum likelihood&lt;/a&gt; but not the mean likelihood. I believe people ought to use the mean likelihood instead.&lt;/p&gt;

      &lt;p&gt;For a symmetric distribution, the mean likelihood equals the maximum likelihood. But the distinction matters for caffeine retention because its likelihood function is skewed. If you compress the likelihood into a single value, that value should tilt toward whichever tail is fatter. The mean accounts for the fatness of the tails; the mode (maximum) does not.&lt;/p&gt;

      &lt;p&gt;For more on this, see McLeod, A. I., &amp;amp; Quenneville, B. (1999). &lt;a href=&quot;https://arxiv.org/pdf/1611.00884.pdf&quot;&gt;Mean likelihood estimators.&lt;/a&gt; &lt;a href=&quot;#fnref:15&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:14&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Really we should model them as &lt;a href=&quot;https://en.wikipedia.org/wiki/Behrens%E2%80%93Fisher_distribution&quot;&gt;Behrens-Fisher distributions&lt;/a&gt; but a t-distribution is close enough. &lt;a href=&quot;#fnref:14&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Simon, F. E., &amp;amp; Ftorek, J. (2022). A new method for approximating the distribution of the ratio of two independent normal random variables. &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:3&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Beaumont, R., James, L. J., &amp;amp; Davison, G. (2017). &lt;a href=&quot;https://doi.org/10.1080/02640414.2016.1241421&quot;&gt;Chronic ingestion of a low dose of caffeine induces tolerance to the performance benefits of caffeine.&lt;/a&gt; &lt;a href=&quot;#fnref:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:3:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt; &lt;a href=&quot;#fnref:3:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;3&lt;/sup&gt;&lt;/a&gt; &lt;a href=&quot;#fnref:3:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;4&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:11&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Lara, Beatriz; Ruiz-Moreno, Carlos; Salinero, Juan José; Del Coso, Juan; Sandbakk, Øyvind  (2019). &lt;a href=&quot;https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6343867/&quot;&gt;Time course of tolerance to the performance benefits of caffeine.&lt;/a&gt; &lt;a href=&quot;#fnref:11&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:11:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt; &lt;a href=&quot;#fnref:11:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;3&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:12&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I say that as if there were two different groups, but really this was a &lt;a href=&quot;https://en.wikipedia.org/wiki/Crossover_study&quot;&gt;crossover study&lt;/a&gt;. The experimenters ran two experiments in succession on the same group of people. They randomly assigned subjects to the caffeine or placebo groups on the first experiment, and then switched them for the second experiment. Effect sizes were determined by comparing each person’s performance with caffeine on day N to their performance with placebo on day N in the other trial.&lt;/p&gt;

      &lt;p&gt;This study had by far the best methodology of any of them; too bad it only had 11 participants. (I’m not surprised, considering that (a) they restricted their sample to physically-active participants and (b) the participants had to repeatedly perform grueling exercises.) &lt;a href=&quot;#fnref:12&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:19&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Shi, D., Nikodijevic, O., Jacobson, K. A., &amp;amp; Daly, J. W. (1993). &lt;a href=&quot;https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3437321/&quot;&gt;Chronic caffeine alters the density of adenosine, adrenergic, cholinergic, GABA, and serotonin receptors and calcium channels in mouse brain.&lt;/a&gt;&lt;/p&gt;

      &lt;p&gt;It would be more accurate to say that caffeine causes mice to grow more receptors, but I assume human brains work the same way.&lt;/p&gt;

      &lt;p&gt;Perhaps someday we will figure out how to augment rat brains to understand writing and then a rat will read this article. If that’s the case, greetings future rat folk, I hope you find this article helpful even though I only cited studies on human subjects.&lt;/p&gt;

      &lt;p&gt;(Future cybernetically-augmented rat: “Why do news articles always present scientific findings as if they apply to rats when they’ve only done the studies on humans?”) &lt;a href=&quot;#fnref:19&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:20&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I calculated this by setting my prior to a &lt;a href=&quot;https://en.wikipedia.org/wiki/Beta_distribution&quot;&gt;beta distribution&lt;/a&gt; with alpha=1 and beta=1.5, which represents a low-information prior with a slight tilt toward 0. &lt;a href=&quot;#fnref:20&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>Avoiding Caffeine Tolerance</title>
				<pubDate>Sat, 02 Mar 2024 00:00:00 -0800</pubDate>
				<link>http://mdickens.me/2024/03/02/caffeine_tolerance/</link>
				<guid isPermaLink="true">http://mdickens.me/2024/03/02/caffeine_tolerance/</guid>
                <description>
                  
                  
                  
                  &lt;h2 id=&quot;summary&quot;&gt;Summary&lt;/h2&gt;

&lt;p&gt;Caffeine improves cognition&lt;sup id=&quot;fnref:11&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:11&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt; and exercise performance&lt;sup id=&quot;fnref:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt;. But if you take caffeine every day, over time it becomes less effective.&lt;/p&gt;

&lt;p&gt;What if instead of taking caffeine every day, you only take it intermittently—say, once every 3 days? How often can most people take caffeine without developing a tolerance?&lt;/p&gt;

&lt;p&gt;The scientific literature on this question is sparse. Here’s what I found:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;
    &lt;p&gt;Experiments on rats found that rats who took caffeine every other day did not develop a tolerance. There are no experiments on humans. There are no experiments that use other intermittent dosing frequencies (such as once every 3 days).&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Internet forum users report that they can take caffeine on average once every 3 days without developing a tolerance. But there’s a lot of variation between individuals.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This post will cover:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;
    &lt;p&gt;The motivation for intermittent dosing&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;A review of the experimental research on the effect of taking caffeine intermittently (TLDR: there’s almost no experimental research)&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;A review of self-reports from the online &lt;a href=&quot;https://en.wikipedia.org/wiki/Nootropic&quot;&gt;nootropics&lt;/a&gt; community&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Intermittent dosing vs. taking caffeine every day&lt;/p&gt;
  &lt;/li&gt;
&lt;/ol&gt;

&lt;!-- more --&gt;

&lt;p&gt;&lt;em&gt;Epistemic status: I have never formally studied pharmacology or psychopharmacology—I’ve barely studied biology for that matter. But somehow nobody has ever looked into intermittent caffeine dosing to a degree that I found satisfactory&lt;sup id=&quot;fnref:18&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:18&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt;, so I did it myself.&lt;sup id=&quot;fnref:19&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:19&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Updated 2024-03-08 to reference three more studies. h/t Gavin Leech for making me aware of them—see also his &lt;a href=&quot;https://www.gleech.org/stims&quot;&gt;article&lt;/a&gt; on caffeine tolerance, which I think explains the concepts better than mine does.&lt;/em&gt;&lt;/p&gt;

&lt;ul id=&quot;markdown-toc&quot;&gt;
  &lt;li&gt;&lt;a href=&quot;#summary&quot; id=&quot;markdown-toc-summary&quot;&gt;Summary&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#why-intermittent-dosing&quot; id=&quot;markdown-toc-why-intermittent-dosing&quot;&gt;Why intermittent dosing?&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#experimental-evidence-on-intermittent-dosing&quot; id=&quot;markdown-toc-experimental-evidence-on-intermittent-dosing&quot;&gt;Experimental evidence on intermittent dosing&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#individual-self-reports&quot; id=&quot;markdown-toc-individual-self-reports&quot;&gt;Individual self-reports&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#is-intermittent-dosing-better-than-taking-caffeine-every-day&quot; id=&quot;markdown-toc-is-intermittent-dosing-better-than-taking-caffeine-every-day&quot;&gt;Is intermittent dosing better than taking caffeine every day?&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#conclusion&quot; id=&quot;markdown-toc-conclusion&quot;&gt;Conclusion&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#appendix-a-a-study-i-would-like-to-see&quot; id=&quot;markdown-toc-appendix-a-a-study-i-would-like-to-see&quot;&gt;Appendix A: A study I would like to see&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#appendix-b-pre-registration-for-a-caffeine-self-experiment&quot; id=&quot;markdown-toc-appendix-b-pre-registration-for-a-caffeine-self-experiment&quot;&gt;Appendix B: Pre-registration for a caffeine self-experiment&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#notes&quot; id=&quot;markdown-toc-notes&quot;&gt;Notes&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;why-intermittent-dosing&quot;&gt;Why intermittent dosing?&lt;/h2&gt;

&lt;p&gt;The standard remedy for caffeine tolerance is to stop caffeine cold turkey for one to two weeks (or longer) to reset your body. Plenty of evidence suggests that this strategy works&lt;sup id=&quot;fnref:13&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:13&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;5&lt;/a&gt;&lt;/sup&gt;, but I’m not happy with it.&lt;/p&gt;

&lt;p&gt;The reason I’m not happy with it has to do with why I take caffeine: I want to improve cognition and exercise performance. If I stop having caffeine for two weeks, am I also supposed to take two weeks off work? Or spend two weeks being tired all day and getting nothing done? That sounds like a waste of time. Am I supposed to stop exercising for two weeks? That sounds unhealthy.&lt;/p&gt;

&lt;p&gt;Taking caffeine intermittently makes more sense to me. If I take caffeine 3 times a week, that works a lot better:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;I can do the demanding high-effort work on caffeine days and do easier work on the non-caffeine days.&lt;/li&gt;
  &lt;li&gt;I can do high-intensity exercise on caffeine days, and rest or do light cardio on the other days.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But I’d rather take caffeine 4–5 days a week if I can get away with it. And if I’ve unknowingly developed a tolerance, I’d rather scale back to once or twice a week. So I’d like to know the maximum frequency that doesn’t dampen caffeine’s benefits.&lt;/p&gt;

&lt;h2 id=&quot;experimental-evidence-on-intermittent-dosing&quot;&gt;Experimental evidence on intermittent dosing&lt;/h2&gt;

&lt;p&gt;There are no studies on humans that administered caffeine less frequently than once a day. But there are a few studies on rats, which all found roughly the same result:&lt;sup id=&quot;fnref:14&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:14&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;6&lt;/a&gt;&lt;/sup&gt;&lt;sup id=&quot;fnref:15&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:15&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;7&lt;/a&gt;&lt;/sup&gt;&lt;sup id=&quot;fnref:16&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:16&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;8&lt;/a&gt;&lt;/sup&gt;&lt;sup id=&quot;fnref:17&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:17&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;9&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;
    &lt;p&gt;Rats who take caffeine every day develop a tolerance after 1–2 weeks.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Rats who take caffeine every other day do not develop a measurable tolerance after 2 weeks.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Some concerns with these studies:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;They used rats, not humans. Humans are different in lots of ways, the most obvious being that we have slower metabolisms. A two-day abstinence for a rat might be the metabolic equivalent of a 6 to 10 day abstinence for a human. (Counterpoint: When taking caffeine daily, rats and humans develop tolerance at about the same rate.)&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;The studies only lasted 14 days. It’s possible that taking caffeine on alternating days still builds up a tolerance, but it takes longer than when taking caffeine daily (perhaps twice as long?).&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Three of the four experiments were run by largely the same research team, so any mistakes they made probably got repeated across all three of those experiments.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;individual-self-reports&quot;&gt;Individual self-reports&lt;/h2&gt;

&lt;p&gt;By far the best evidence I could find on how to avoid tolerance in humans was a 2021 &lt;a href=&quot;https://www.reddit.com/poll/m94niq?utm_name=caffeinecycling&quot;&gt;reddit poll&lt;/a&gt; in the &lt;a href=&quot;https://old.reddit.com/r/caffeinecycling/&quot;&gt;/r/caffeinecycling&lt;/a&gt; subreddit. An internet poll isn’t great evidence, but I’ll take what I can get.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/caffeine-cycling-poll.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;(See footnote for the poll results in text format.&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;10&lt;/a&gt;&lt;/sup&gt;)&lt;/p&gt;

&lt;p&gt;Among 84 respondents, at least 48 reported taking caffeine intermittently. 54% of those 48 reported that they do not develop a tolerance when taking caffeine every third day. And on average, they preferred to wait 3.58 days between doses.&lt;sup id=&quot;fnref:20&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:20&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;11&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;
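&lt;p&gt;As a rough illustration of how a poll like this can be turned into a posterior estimate, here is a minimal beta-binomial sketch. It assumes the Beta(1, 1.5) prior mentioned in the footnotes and treats 26 “no tolerance” answers out of 48 as the (hypothetical) exact counts behind the 54% figure.&lt;/p&gt;

```python
# Beta-binomial update: Beta(1, 1.5) prior plus hypothetical poll counts
# (26 of 48 intermittent users reporting no tolerance at every-3-days).
prior_a, prior_b = 1.0, 1.5
no_tolerance, tolerance = 26, 48 - 26

post_a = prior_a + no_tolerance
post_b = prior_b + tolerance
posterior_mean = post_a / (post_a + post_b)

# Posterior mean probability that a given intermittent user avoids
# tolerance at an every-3-days frequency.
print(f"{posterior_mean:.2f}")  # prints roughly 0.53
```

&lt;p&gt;With a low-information prior and ~50 respondents, the data dominates: the posterior mean lands very close to the raw 54%.&lt;/p&gt;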

&lt;p&gt;This suggests that:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;
    &lt;p&gt;On average, people can take caffeine every 3 days or so.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;There’s a lot of variation between individuals, so the frequency that works for you personally is probably more or less often than that.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Some concerns with this poll:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;This was based on self-reporting, not on any objective metric of performance.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;It’s a biased sample: respondents found the poll because they were already interested in caffeine cycling. (I’d guess this poll over-represents people who develop caffeine tolerance quickly because people who don’t develop much tolerance won’t bother to cycle caffeine, and won’t go to a caffeine cycling subreddit.)&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;The rate of tolerance buildup depends on the size of the dose, but we don’t know anything about how much caffeine the poll respondents were taking.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I searched online nootropics forums (&lt;a href=&quot;https://old.reddit.com/r/Nootropics/&quot;&gt;reddit.com/r/Nootropics/&lt;/a&gt; and &lt;a href=&quot;https://www.longecity.org/&quot;&gt;longecity.org&lt;/a&gt;) for anecdotes on commenters’ preferred caffeine frequency. The results broadly align with the &lt;a href=&quot;https://old.reddit.com/r/caffeinecycling&quot;&gt;/r/caffeinecycling&lt;/a&gt; poll; see footnote for a list of anecdotes.&lt;sup id=&quot;fnref:10&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:10&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;12&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;h2 id=&quot;is-intermittent-dosing-better-than-taking-caffeine-every-day&quot;&gt;Is intermittent dosing better than taking caffeine every day?&lt;/h2&gt;

&lt;p&gt;If you take caffeine every day, you will develop a tolerance, but your performance might not fully revert to baseline. Could you get more benefit from taking caffeine daily than from cycling it? To answer that question, first we need to ask: How much ongoing benefit can you expect to get if you take caffeine daily?&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Update 2024-03-29: I wrote a longer post about this question: &lt;a href=&quot;/2024/03/29/does_caffeine_stop_working/&quot;&gt;Does Caffeine Stop Working?&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Many studies have examined this question, but almost all of them are methodologically flawed to the point of being useless.&lt;sup id=&quot;fnref:4&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:4&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;13&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;I found exactly two studies (on humans) with strong methodology: Beaumont et al. (2017)&lt;sup id=&quot;fnref:6&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:6&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;14&lt;/a&gt;&lt;/sup&gt; and Lara et al. (2019)&lt;sup id=&quot;fnref:7&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:7&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;15&lt;/a&gt;&lt;/sup&gt;. These studies gave participants either caffeine or placebo every day for 28 and 20 days (respectively) and tested their exercise performance before and after. The experiments both found that:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Taking caffeine regularly decreased its potency, and this effect was statistically significant.&lt;/li&gt;
  &lt;li&gt;Habituated users still retained 1/3 to 1/2 of the benefit of caffeine, but this effect was not statistically significant. (The studies could not rule out the possibility that habituated users saw zero positive effect from caffeine.)&lt;sup id=&quot;fnref:8&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:8&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;16&lt;/a&gt;&lt;/sup&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I found another few studies with decent but not great methodology: &lt;a href=&quot;https://pubmed.ncbi.nlm.nih.gov/23108937/&quot;&gt;Rogers et al. (2013)&lt;/a&gt;&lt;sup id=&quot;fnref:23&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:23&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;17&lt;/a&gt;&lt;/sup&gt;, &lt;a href=&quot;https://pubmed.ncbi.nlm.nih.gov/15678363/&quot;&gt;Haskell et al. (2005)&lt;/a&gt;&lt;sup id=&quot;fnref:25&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:25&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;18&lt;/a&gt;&lt;/sup&gt;, and &lt;a href=&quot;https://pubmed.ncbi.nlm.nih.gov/16910172/&quot;&gt;Smith et al. (2006)&lt;/a&gt;&lt;sup id=&quot;fnref:24&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:24&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;19&lt;/a&gt;&lt;/sup&gt;. These studies divided participants into caffeine non-users/low users versus high users. Then they randomly administered either caffeine or placebo and tested participants’ performance on various cognitive tests.&lt;/p&gt;

&lt;p&gt;This methodology isn’t perfect because low and high users might differ in ways that could bias the results. (For more discussion of this possibility, see &lt;a href=&quot;https://pubmed.ncbi.nlm.nih.gov/23108937/&quot;&gt;Rogers et al. (2013)&lt;/a&gt;.) It would be better to randomize participants to take either caffeine or placebo for several weeks. Methodological limitations aside, these three studies suggest that high caffeine users do retain some benefits, although the exact amount varied depending on the study and the measurement used, from 28% to 315% (i.e., at the high end, a habituated user experiences roughly three times the benefit of a naive user).&lt;sup id=&quot;fnref:21&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:21&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;20&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;Some studies on rats, cited previously&lt;sup id=&quot;fnref:14:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:14&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;6&lt;/a&gt;&lt;/sup&gt;&lt;sup id=&quot;fnref:15:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:15&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;7&lt;/a&gt;&lt;/sup&gt;&lt;sup id=&quot;fnref:16:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:16&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;8&lt;/a&gt;&lt;/sup&gt;&lt;sup id=&quot;fnref:17:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:17&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;9&lt;/a&gt;&lt;/sup&gt;, gave rats caffeine daily for 14 days and found that they retained somewhere between none and all of the performance benefits of caffeine, depending on the study. (Astute readers will notice that this doesn’t narrow things down much.)&lt;/p&gt;

&lt;p align=&quot;center&quot;&gt;***&lt;/p&gt;

&lt;p&gt;Suppose caffeine retains 1/3 of its benefit when you take it every day—that’s my best guess given the available evidence. Compare that to taking caffeine only once every 3 days. Either way it’s about the same: either you’re getting 1/3 of the benefit every day, or the full benefit on 1 out of 3 days.&lt;/p&gt;

&lt;p&gt;Except they’re not really the same because some days matter more than others. For example, if you exercise 4 days a week and you take caffeine on 2 of those days, then you’re only taking caffeine on 2/7 days of the week, but you’re getting half the exercise benefit, which is better. For that reason, I would prefer taking caffeine intermittently over taking it daily.&lt;/p&gt;
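&lt;p&gt;The arithmetic in the last two paragraphs can be sketched in a few lines. The 1/3 retained-benefit figure is my rough guess from above, and the exercise-day weights are made up for illustration:&lt;/p&gt;

```python
# Average per-day benefit, in arbitrary units where a naive dose = 1.
retained_fraction = 1 / 3                # rough guess: benefit retained under daily use

daily_avg = 1.0 * retained_fraction      # 1/3 of the benefit, every day
intermittent_avg = 1.0 * (1 / 3)         # full benefit, 1 day in 3
print(daily_avg == intermittent_avg)     # prints True: same per-day average

# But days are not equally valuable. Hypothetical schedule: exercise
# 4 days a week, with caffeine on 2 of those days (2 doses per 7 days).
exercise_days, caffeinated_exercise_days = 4, 2
exercise_benefit_fraction = caffeinated_exercise_days / exercise_days
print(exercise_benefit_fraction)         # prints 0.5, from dosing on only 2/7 of days
```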

&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;Prior to writing this article, I was taking caffeine 3 days a week, on Monday, Wednesday, and Friday. I wanted to know if I could take caffeine more often without developing a tolerance. My research tentatively suggests:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;My approach was reasonable, and there’s no compelling evidence for taking caffeine more or less often than I currently do.&lt;/li&gt;
  &lt;li&gt;If I do experiment with taking caffeine more frequently, there’s a good chance that I will develop a tolerance.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2 id=&quot;appendix-a-a-study-i-would-like-to-see&quot;&gt;Appendix A: A study I would like to see&lt;/h2&gt;

&lt;p&gt;I would like to see a randomized controlled trial on the effectiveness of caffeine when taken intermittently. If I had a big enough budget, I would design the study like this:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Break participants into at least three groups:
    &lt;ol&gt;
      &lt;li&gt;A placebo-control group who takes a placebo every day.&lt;/li&gt;
      &lt;li&gt;A caffeine-control group who takes caffeine every day. (This group shows what fully-habituated users look like.)&lt;/li&gt;
      &lt;li&gt;One or more experimental groups that take caffeine intermittently. If we can get a big enough sample size, have separate groups for a variety of dosing frequencies.&lt;/li&gt;
    &lt;/ol&gt;
  &lt;/li&gt;
  &lt;li&gt;Following the methodology of Beaumont et al. (2017)&lt;sup id=&quot;fnref:6:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:6&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;14&lt;/a&gt;&lt;/sup&gt;, pre-test every group on athletic performance and/or cognition both when taking caffeine and when taking a placebo.&lt;/li&gt;
  &lt;li&gt;Administer caffeine or placebo for at least 28 days (longer is better).&lt;/li&gt;
  &lt;li&gt;At the end of the experiment, administer caffeine to all participants and test them again.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The study would then look at the following outcomes:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Measure the benefit of caffeine as the performance on the final caffeinated test relative to each of the two pre-tests.
    &lt;ul&gt;
      &lt;li&gt;If a particular group performs better on the final test than on the placebo pre-test, that means they did &lt;em&gt;not&lt;/em&gt; develop complete caffeine tolerance.&lt;/li&gt;
      &lt;li&gt;If a group performs &lt;em&gt;worse&lt;/em&gt; on the final test than on the caffeine pre-test, that means they developed at least &lt;em&gt;some&lt;/em&gt; tolerance.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;Measure the performance of each group relative to the placebo-control group and the caffeine-control group.
    &lt;ul&gt;
      &lt;li&gt;If an experimental group outperforms the caffeine-control group on the final test, that means intermittent dosing has succeeded at reducing tolerance formation.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;Which group gets the greatest total benefit as measured by performance multiplied by dosing frequency?&lt;/li&gt;
&lt;/ul&gt;
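&lt;p&gt;To make the outcome logic above concrete, here is a toy calculation with invented scores (arbitrary performance units; none of these numbers come from real data):&lt;/p&gt;

```python
# Hypothetical test scores for one experimental group.
placebo_pretest = 100.0     # performance without caffeine, before the trial
caffeine_pretest = 110.0    # performance with caffeine, before the trial
final_caffeinated = 106.0   # performance with caffeine, after the trial

# A positive difference here means the group did NOT develop complete tolerance.
vs_placebo = final_caffeinated - placebo_pretest
# A negative difference here means the group developed at least SOME tolerance.
vs_caffeine = final_caffeinated - caffeine_pretest

print(vs_placebo, vs_caffeine)  # prints 6.0 -4.0
```

&lt;p&gt;In this made-up example the group kept part of caffeine’s benefit while still habituating somewhat, which is roughly what the human studies cited above found.&lt;/p&gt;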

&lt;p&gt;If we wanted to make the study more complicated, we could follow the design of Lara et al. (2019) and test the participants frequently throughout the study. That would let us run a regression on the course of caffeine tolerance and also compare the performance of intermittent caffeine users on caffeine days vs. placebo days. Or we could divide up the groups by dosage, giving some people 1.5 mg/kg and others 3 mg/kg (or some other dosage).&lt;/p&gt;

&lt;p&gt;If I were on a budget, I’d scale down the study to only two groups: a daily-caffeine group and an intermittent-caffeine group. With no placebo group, we don’t know how well the intermittent group performs relative to baseline, but we can compare their tolerance development relative to the caffeine group, which I believe is more important.&lt;/p&gt;

&lt;h2 id=&quot;appendix-b-pre-registration-for-a-caffeine-self-experiment&quot;&gt;Appendix B: Pre-registration for a caffeine self-experiment&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;Updated 2024-03-29 to fix typos and improve wording.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I am going to conduct the following experiment on myself, in which I test whether taking caffeine intermittently decreases its effectiveness.&lt;/p&gt;

&lt;p&gt;I will test my reaction time by taking the &lt;a href=&quot;https://humanbenchmark.com/tests/reactiontime&quot;&gt;humanbenchmark.com test&lt;/a&gt; twice in a row. One test consists of 5 reaction events, so this will give a total of 10 reaction events; I will take my reaction time at that moment to be the average of the 10. I will take the test using the same computer, monitor, and mouse so that latency is consistent.&lt;/p&gt;

&lt;p&gt;I will measure the effect of caffeine using a reaction time test because (a) caffeine is known to improve reaction time, (b) reaction time is easy to test, (c) it’s unlikely to improve with practice so it makes for a good consistent test variable, and (d) it’s hard to placebo-effect myself into improving my reaction time (which is important because I can’t blind myself).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Phase 1. Calibration phase.&lt;/strong&gt; Take a reaction time test twice a day (following the same schedule as described in phase 3 below). Continue for four weeks. Plot a regression of my reaction time across the four weeks. The purpose of the calibration phase is to ensure that my reaction time does not improve from practicing every day.&lt;/p&gt;

&lt;p&gt;During this phase I will continue drinking coffee 3 times per week (as I have been doing), but I won’t measure my reaction time on caffeine vs. non-caffeine days because the only purpose of this phase is to check that my reaction time doesn’t improve with practice.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Phase 2. Abstinence phase.&lt;/strong&gt; Abstain from caffeine for one week (9 days total, in between the last Friday of the calibration phase and the first Monday of the test phase). Test reaction time during this week. If I was habituated to caffeine in phase 1 then my reaction time should improve over the course of phase 2 as my tolerance wears off. Measure the slope of reaction times across days during the abstinence phase.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Phase 3. Test phase.&lt;/strong&gt; Resume drinking coffee 3 days a week and continue for four weeks. Take a reaction time test twice a day, at (say) 8am and then 10am—the exact time doesn’t matter, but the first test is before having coffee and the second is after coffee. (Or on days when I don’t have coffee, take the test at times before and after when I would have had coffee.)&lt;/p&gt;

&lt;p&gt;The primary variable of interest is the difference in reaction time on caffeine days before and after taking caffeine. I will also look at the difference in the 8am vs. 10am tests on non-caffeine days to ensure that reaction time doesn’t get better on the second test even without caffeine.&lt;/p&gt;

&lt;p&gt;After phase 3 completes, I will &lt;strong&gt;measure the following outcomes:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Slope of the difference in reaction time between pre-caffeine and post-caffeine tests on caffeine days. (A negative slope indicates diminishing effect of caffeine.)&lt;/li&gt;
  &lt;li&gt;Slope of reaction time across post-caffeine tests. (A positive slope suggests caffeine tolerance. Remember that positive = longer reaction time = bad.)&lt;/li&gt;
  &lt;li&gt;Slope of reaction time across pre-caffeine tests. (A positive slope suggests caffeine withdrawal symptoms.)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Check if the slopes are statistically significant. If the slopes for variables #2 and #3 are both ascending but only one is statistically significant, or they are jointly statistically significant but individually insignificant, that suggests that I did develop a tolerance but that the experiment was underpowered. If both slopes are insignificant, that suggests no tolerance (or that the experiment was underpowered). If slope #2 or #3 is &lt;em&gt;descending&lt;/em&gt; and statistically significant, that suggests something weird is going on and my experiment is probably invalid.&lt;/p&gt;
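&lt;p&gt;For the slope tests, something like &lt;code&gt;scipy.stats.linregress&lt;/code&gt; would do the job. A sketch on synthetic data (the drift and noise parameters are invented; this shows the shape of the analysis, not the real experiment):&lt;/p&gt;

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic stand-in for outcome #2: one post-caffeine reaction time (ms)
# per caffeine day, 3 days/week for 4 weeks, drifting upward over time.
days = np.arange(12)
reaction_ms = 250 + 1.5 * days + rng.normal(0, 8, size=days.size)

result = stats.linregress(days, reaction_ms)
print(f"slope: {result.slope:.2f} ms/day, p-value: {result.pvalue:.3f}")
# A positive slope with a small p-value would suggest caffeine tolerance.
```

&lt;p&gt;The same call would be run on each of the three outcome series, and linregress reports the p-value for the null hypothesis of zero slope directly.&lt;/p&gt;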

&lt;p&gt;If the slopes are all approximately zero, that means I can probably have more caffeine before I develop a tolerance. In that case, conduct phase 4.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Phase 4. High-caffeine test phase.&lt;/strong&gt; Increase coffee intake to 4 days a week for four weeks. Observe the same variables as phase 3.&lt;/p&gt;

&lt;p&gt;This experiment is not blinded and it’s probably underpowered, but it’s better than nothing.&lt;/p&gt;

&lt;p&gt;Thank you to Ansh Shukla for reviewing a draft of this caffeine self-experiment.&lt;/p&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:11&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Astrid Nehlig (2010). &lt;a href=&quot;https://gwern.net/doc/nootropic/caffeine/2010-nehlig.pdf&quot;&gt;Is Caffeine a Cognitive Enhancer?&lt;/a&gt; &lt;a href=&quot;#fnref:11&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:3&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Erica R Goldstein, Tim Ziegenfuss, Doug Kalman, Richard Kreider, Bill Campbell, Colin Wilborn, Lem Taylor, Darryn Willoughby, Jeff Stout, Sue Graves, Robert Wildman, John L Ivy, Marie Spano, Abbie E Smith &amp;amp; Jose Antonio (2010). &lt;a href=&quot;https://pubmed.ncbi.nlm.nih.gov/20205813/&quot;&gt;International society of sports nutrition position stand: caffeine and performance.&lt;/a&gt; &lt;a href=&quot;#fnref:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:18&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;There are a few posts on nootropics forums, particularly &lt;a href=&quot;https://old.reddit.com/r/caffeinecycling/&quot;&gt;reddit.com/r/caffeinecycling/&lt;/a&gt;, but they don’t go into much depth. There are no academic papers that discuss intermittent dosing in humans. &lt;a href=&quot;#fnref:18&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:19&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I originally just wanted to answer the question for myself, but by the time I had read through about 10 studies and a bunch of forum posts, I figured I might as well write something up about it. &lt;a href=&quot;#fnref:19&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:13&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Laura Juliano &amp;amp; Roland Griffiths (2004). &lt;a href=&quot;https://pubmed.ncbi.nlm.nih.gov/15448977/&quot;&gt;A critical review of caffeine withdrawal: empirical validation of symptoms and signs, incidence, severity, and associated features.&lt;/a&gt; &lt;a href=&quot;#fnref:13&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:14&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;C. J. Meliska, R. E. Landrum &amp;amp; T. A. Landrum (1990). &lt;a href=&quot;https://pubmed.ncbi.nlm.nih.gov/2320659/&quot;&gt;Tolerance and sensitization to chronic and subchronic oral caffeine: effects on wheelrunning in rats.&lt;/a&gt; &lt;a href=&quot;#fnref:14&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:14:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:15&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Omar Cauli, Annalisa Pinna, Valentina Valentini &amp;amp; Micaela Morelli (2003). &lt;a href=&quot;https://www.nature.com/articles/1300240&quot;&gt;Subchronic Caffeine Exposure Induces Sensitization to Caffeine and Cross-Sensitization to Amphetamine Ipsilateral Turning Behavior Independent from Dopamine Release.&lt;/a&gt; &lt;a href=&quot;#fnref:15&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:15:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:16&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;N. Simola, E. Tronci, A. Pinna &amp;amp; M. Morelli (2006). &lt;a href=&quot;https://pubmed.ncbi.nlm.nih.gov/16874467/&quot;&gt;Subchronic-intermittent caffeine amplifies the motor effects of amphetamine in rats.&lt;/a&gt; &lt;a href=&quot;#fnref:16&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:16:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:17&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Omar Cauli &amp;amp; Micaela Morelli (2002). &lt;a href=&quot;https://pubmed.ncbi.nlm.nih.gov/12122482/&quot;&gt;Subchronic caffeine administration sensitizes rats to the motor-activating effects of dopamine D(1) and D(2) receptor agonists.&lt;/a&gt; &lt;a href=&quot;#fnref:17&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:17:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Poll results for “What is your optimal time for cycling caffeine without tolerance development?”&lt;/p&gt;

      &lt;table&gt;
        &lt;thead&gt;
          &lt;tr&gt;
            &lt;th&gt;Votes&lt;/th&gt;
            &lt;th&gt;Answer&lt;/th&gt;
          &lt;/tr&gt;
        &lt;/thead&gt;
        &lt;tbody&gt;
          &lt;tr&gt;
            &lt;td&gt;7&lt;/td&gt;
            &lt;td&gt;At least every second day&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
            &lt;td&gt;19&lt;/td&gt;
            &lt;td&gt;At least every third day&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
            &lt;td&gt;4&lt;/td&gt;
            &lt;td&gt;At least every fourth day&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
            &lt;td&gt;18&lt;/td&gt;
            &lt;td&gt;At least a week break between caffeine doses&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
            &lt;td&gt;13&lt;/td&gt;
            &lt;td&gt;Caffeine for weeks, then weeks off (“long cycling”)&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
            &lt;td&gt;24&lt;/td&gt;
            &lt;td&gt;Other&lt;/td&gt;
          &lt;/tr&gt;
        &lt;/tbody&gt;
      &lt;/table&gt;
      &lt;p&gt;&lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:20&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;That’s with the assumption that all the “at least a week” respondents take caffeine exactly once a week. &lt;a href=&quot;#fnref:20&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:10&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;A list of anecdotes, paraphrased:&lt;/p&gt;

      &lt;ul&gt;
        &lt;li&gt;&lt;a href=&quot;https://www.longecity.org/forum/topic/40228-avoiding-caffeine-tolerance/#entry434181&quot;&gt;christopherforums&lt;/a&gt;: I don’t develop tolerance when taking every 3–4 days, but do develop when taking every other day.&lt;/li&gt;
        &lt;li&gt;&lt;a href=&quot;https://old.reddit.com/r/Nootropics/comments/5obfq7/optimal_caffeine_schedule/dcia6ci/&quot;&gt;/u/(deleted)&lt;/a&gt;: 3 days a week is probably sustainable for 3–6 weeks before I’d want to take a week break. I can do once every 3 days indefinitely without building tolerance.&lt;/li&gt;
        &lt;li&gt;&lt;a href=&quot;https://old.reddit.com/r/Nootropics/comments/ls7ay6/the_caffeine_enigma_does_habitual_longterm/&quot;&gt;/u/FreemanOfficial&lt;/a&gt;: I develop some tolerance when taking every second day, but no tolerance when taking every third day.&lt;/li&gt;
        &lt;li&gt;&lt;a href=&quot;https://old.reddit.com/r/Nootropics/comments/ls7ay6/the_caffeine_enigma_does_habitual_longterm/gopsvqd/&quot;&gt;/u/Lceus&lt;/a&gt;: For 3 months, I took caffeine for 3 days and then abstained for 4 days. The first caffeine days always had a strong effect, with the next 2 days having a positive but less enjoyable effect.&lt;/li&gt;
        &lt;li&gt;&lt;a href=&quot;https://old.reddit.com/r/Nootropics/comments/x1icgx/do_you_cycle_caffeine_to_avoid_tolerance_and/ime6cls/&quot;&gt;/u/odd1e&lt;/a&gt;: I do 2x or 3x a week. If I do 3x for too long, a tolerance starts to build up.&lt;/li&gt;
        &lt;li&gt;&lt;a href=&quot;https://old.reddit.com/r/slatestarcodex/comments/19efdh6/is_caffeine_consumption_a_habit_worth_picking_up/kjchocm/&quot;&gt;/u/owlthatissuperb&lt;/a&gt;: IME the best strategy is to drink coffee 2–3x per week.&lt;/li&gt;
        &lt;li&gt;My own personal experience: I take caffeine either 3x or 3.5x a week (the extra 0.5x means I take a half-dose on one day). I don’t believe I’ve developed (much) tolerance because I don’t experience any withdrawal symptoms when I don’t take caffeine. I haven’t experimented with other dosing schedules, except that I used to drink coffee every day and I definitely developed a tolerance from that.&lt;/li&gt;
      &lt;/ul&gt;

      &lt;p&gt;I also found anecdotes of people reporting that they intermittently take caffeine but without saying anything about why they do it that way, so I did not include those. &lt;a href=&quot;#fnref:10&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:4&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Most randomized controlled trials on the benefits of caffeine cannot distinguish between two possibilities:&lt;/p&gt;

      &lt;ol&gt;
        &lt;li&gt;Daily caffeine users get a persistent benefit from taking caffeine.&lt;/li&gt;
        &lt;li&gt;Daily caffeine users who abstain from caffeine experience withdrawal symptoms, which hurts performance. Taking caffeine brings them back up to baseline performance, but they don’t do any better than they would have if they’d never developed a tolerance in the first place.&lt;/li&gt;
      &lt;/ol&gt;

      &lt;p&gt;A literature review by James &amp;amp; Rogers (2005)&lt;sup id=&quot;fnref:5&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:5&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;21&lt;/a&gt;&lt;/sup&gt; discussed possible approaches for distinguishing these possibilities and looked at some studies that attempt these approaches. It concluded that the apparent benefits of caffeine are most likely due to reversing withdrawal symptoms. I believe this is an overly strong conclusion based on the studies they cited, and in fact the studies were simply too underpowered to show that caffeine has a positive effect for habituated users.&lt;/p&gt;

      &lt;p&gt;(The process of writing this article has significantly lowered my opinion of the field of psychopharmacology.) &lt;a href=&quot;#fnref:4&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:6&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Ross Beaumont, Philip Cordery, Mark Funnell, Stephen Mears, Lewis James &amp;amp; Phillip Watson (2017). &lt;a href=&quot;https://doi.org/10.1080/02640414.2016.1241421&quot;&gt;Chronic ingestion of a low dose of caffeine induces tolerance to the performance benefits of caffeine.&lt;/a&gt; &lt;a href=&quot;#fnref:6&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:6:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:7&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Beatriz Lara, Carlos Ruiz-Moreno, Juan Jose Salinero &amp;amp; Juan Del Coso (2019). &lt;a href=&quot;https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6343867/&quot;&gt;Time course of tolerance to the performance benefits of caffeine.&lt;/a&gt; &lt;a href=&quot;#fnref:7&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:8&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;An explanation of how I drew conclusions from these two studies:&lt;/p&gt;

      &lt;p&gt;Lara et al. (2019) gave one group a placebo and the other group caffeine every day for 20 days and tested their exercise performance every few days using various metrics such as maximum power output and &lt;a href=&quot;https://en.wikipedia.org/wiki/VO2_max&quot;&gt;VO2 max&lt;/a&gt;. The caffeine group consistently outperformed the placebo group, but the performance difference between the two groups declined by between 1/3 and 2/3 by day 20 (with different metrics of performance declining by different amounts). The authors did not report significance tests on the change in marginal benefit of caffeine between day 1 and day 20, but it was consistently negative across all metrics.&lt;/p&gt;

      &lt;p&gt;Beaumont et al. (2017) pre-tested participants’ performance both with caffeine and with placebo. Then participants took either caffeine or placebo for 28 days and took another test along with a dose of caffeine. In this experiment:&lt;/p&gt;

      &lt;ul&gt;
        &lt;li&gt;The caffeine group performed significantly worse on the final test than on the pre-test with caffeine.&lt;/li&gt;
        &lt;li&gt;The caffeine group performed non-significantly better on the final test than on the pre-test with placebo. This suggests that caffeine may have provided some benefit even with tolerance, but we can’t rule out the possibility that it had no benefit.&lt;/li&gt;
        &lt;li&gt;The maximum-likelihood possibility is that caffeine tolerance reduces its benefits by 66%. (In the pre-test, the caffeine group did an average of 38.4 kJ more work with caffeine than with placebo. In the post-test, they did 13.1 kJ more work than in the placebo pre-test and 25.3 kJ less than in the caffeine pre-test. The observed reduction in marginal power output is 25.3 / 38.4 = 66%.)&lt;/li&gt;
      &lt;/ul&gt;

      &lt;p&gt;Taken together, these experiments suggest that caffeine retains something like 1/3 of its benefit even after habituation.&lt;/p&gt;
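      &lt;p&gt;For readers who want to check the arithmetic, the 66% figure can be re-derived mechanically from the numbers quoted in the bullets above (this is just a sketch of that calculation, not anything from the paper itself):&lt;/p&gt;

```python
# Re-derive the ~66% tolerance figure from the Beaumont et al. numbers above.
benefit_pretest = 38.4  # kJ extra work with caffeine vs. placebo, pre-test
posttest_gain = 13.1    # kJ post-test advantage over the placebo pre-test
lost_to_tolerance = benefit_pretest - posttest_gain  # 25.3 kJ
print(round(lost_to_tolerance / benefit_pretest * 100))  # -> 66 (% of benefit lost)
```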

      &lt;p&gt;However, we should not over-update on these results. Lara et al. had only 11 participants and Beaumont et al. had 18 participants. Given the small &lt;em&gt;n&lt;/em&gt; and small effect sizes, we can’t draw strong conclusions from either of these two studies. And their methodologies might not generalize:&lt;/p&gt;

      &lt;ul&gt;
        &lt;li&gt;They both gave participants 3 mg caffeine per kg bodyweight (which is around 200 mg for an average-sized person), so we don’t know what happens at other dosages. Some research suggests that users see best results at doses twice that high.&lt;/li&gt;
        &lt;li&gt;The experiments lasted 20 and 28 days, respectively, so we don’t know if caffeine tolerance continues to build up over several months.&lt;/li&gt;
        &lt;li&gt;The participants they recruited were not representative of the general population.&lt;/li&gt;
        &lt;li&gt;They only looked at benefits to exercise performance, not cognition.&lt;/li&gt;
      &lt;/ul&gt;
      &lt;p&gt;&lt;a href=&quot;#fnref:8&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:23&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Peter Rogers, Susan Heatherley, Emma Mullings &amp;amp; Jessica Smith (2013). &lt;a href=&quot;https://pubmed.ncbi.nlm.nih.gov/23108937/&quot;&gt;Faster but not smarter: effects of caffeine and caffeine withdrawal on alertness and performance.&lt;/a&gt; &lt;a href=&quot;#fnref:23&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:25&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Crystal Haskell, David Kennedy, Keith Wesnes &amp;amp; Andrew Scholey (2005). &lt;a href=&quot;https://pubmed.ncbi.nlm.nih.gov/15678363/&quot;&gt;Cognitive and mood improvements of caffeine in habitual consumers and habitual non-consumers of caffeine.&lt;/a&gt; &lt;a href=&quot;#fnref:25&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:24&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Andrew Smith, Gary Christopher &amp;amp; David Sutherland (2006). &lt;a href=&quot;https://pubmed.ncbi.nlm.nih.gov/16910172/&quot;&gt;Effects of caffeine in overnight-withdrawn consumers and non-consumers.&lt;/a&gt; &lt;a href=&quot;#fnref:24&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:21&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;None of these studies directly reported the information we care about, but I was able to figure some things out from what they did report.&lt;/p&gt;

      &lt;p&gt;We want to know the benefit of caffeine for a habitual user. We can measure that by comparing the performance of high-caffeine users after taking caffeine versus low-caffeine users after taking placebo. If high users develop a complete caffeine tolerance then these two groups should perform the same—when a habitual user takes caffeine, it’s as if they’ve taken nothing.&lt;/p&gt;

      &lt;p&gt;If we want to know exactly how &lt;em&gt;much&lt;/em&gt; of the benefit habitual users retain, it gets a little bit complicated, but we can still figure it out. Start by deriving two variables from the groups’ performance:&lt;/p&gt;

      &lt;ol&gt;
        &lt;li&gt;baseline &lt;strong&gt;benefit&lt;/strong&gt; of caffeine for naive users = low-caffeine users on caffeine – low-caffeine users on placebo&lt;/li&gt;
        &lt;li&gt;absolute &lt;strong&gt;retention&lt;/strong&gt; of benefits for habituated users = high-caffeine users on caffeine – low-caffeine users on placebo&lt;/li&gt;
      &lt;/ol&gt;

      &lt;p&gt;Then calculate &lt;strong&gt;relative retention&lt;/strong&gt; as retention / benefit.&lt;/p&gt;

      &lt;p&gt;I performed this calculation using the primary performance metrics from each study (ignoring self-reported metrics like sleepiness) and got the following results.&lt;/p&gt;

      &lt;p&gt;Rogers et al. (2013) had by far the biggest sample size at 369 (compared to 48 and 25 for Haskell et al. and Smith et al., respectively). Rogers et al. did not report means or standard errors for the individual experimental groups, but it did report (ANOVA) significance tests for the four main performance metrics used by the study&lt;sup id=&quot;fnref:22&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:22&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;22&lt;/a&gt;&lt;/sup&gt;. I can summarize the results as:&lt;/p&gt;

      &lt;ul&gt;
        &lt;li&gt;Across all four metrics, high-caffeine users performed worse than low-caffeine users when given placebo, and 3/4 differences were statistically significant.&lt;/li&gt;
        &lt;li&gt;High-caffeine users on caffeine outperformed low-caffeine users on placebo on 3/4 metrics. 2 out of those 3 were statistically significant.&lt;/li&gt;
        &lt;li&gt;High-caffeine users on caffeine actually outperformed low-caffeine users on caffeine on (a different set of) 2 metrics (but these results were not significant).&lt;/li&gt;
        &lt;li&gt;From eyeballing the graphs in the paper, the four metrics had relative retentions of approximately 120%, 140%, –30%, and 80%, with the second and fourth retentions being significantly different from 0.&lt;/li&gt;
      &lt;/ul&gt;

      &lt;p&gt;These results suggest that:&lt;/p&gt;

      &lt;ul&gt;
        &lt;li&gt;Habitual users experience withdrawal symptoms when they stop taking caffeine.&lt;/li&gt;
        &lt;li&gt;Habitual users still retain some benefit above baseline—they do not become fully habituated to caffeine.&lt;/li&gt;
        &lt;li&gt;Possibly, habitual users do not lose &lt;em&gt;any&lt;/em&gt; of the benefits. (The results lean this way, but not strongly.)&lt;/li&gt;
      &lt;/ul&gt;

      &lt;p&gt;The other two studies did report means, so I can explicitly quantify the relative retention of the benefits of caffeine. In the tables below, I report the retention, benefit, and relative retention for the main metrics the studies used.&lt;/p&gt;

      &lt;p&gt;From Haskell et al. (2005): (RT = reaction time, RVIP = rapid visual information processing)&lt;/p&gt;

      &lt;table&gt;
        &lt;thead&gt;
          &lt;tr&gt;
            &lt;th&gt;Metric&lt;/th&gt;
            &lt;th&gt;Retention&lt;/th&gt;
            &lt;th&gt;Benefit&lt;/th&gt;
            &lt;th&gt;Retention %&lt;/th&gt;
          &lt;/tr&gt;
        &lt;/thead&gt;
        &lt;tbody&gt;
          &lt;tr&gt;
            &lt;td&gt;simple RT&lt;/td&gt;
            &lt;td&gt;-17.64&lt;/td&gt;
            &lt;td&gt;-5.6&lt;/td&gt;
            &lt;td&gt;315%&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
            &lt;td&gt;digit vigilance RT&lt;/td&gt;
            &lt;td&gt;-15.5&lt;/td&gt;
            &lt;td&gt;-16.2&lt;/td&gt;
            &lt;td&gt;96%&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
            &lt;td&gt;RVIP false alarms&lt;/td&gt;
            &lt;td&gt;-0.5&lt;/td&gt;
            &lt;td&gt;-0.21&lt;/td&gt;
            &lt;td&gt;238%&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
            &lt;td&gt;spatial memory&lt;/td&gt;
            &lt;td&gt;0.01&lt;/td&gt;
            &lt;td&gt;0.01&lt;/td&gt;
            &lt;td&gt;100%&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
            &lt;td&gt;numeric memory RT&lt;/td&gt;
            &lt;td&gt;-11.92&lt;/td&gt;
            &lt;td&gt;-42.42&lt;/td&gt;
            &lt;td&gt;28%&lt;/td&gt;
          &lt;/tr&gt;
        &lt;/tbody&gt;
      &lt;/table&gt;
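      &lt;p&gt;The Retention % column is the absolute retention divided by the baseline benefit; note that for reaction-time metrics both values are negative, since lower is better. A minimal sketch reproducing the first two rows of the table above:&lt;/p&gt;

```python
# Retention % = absolute retention / baseline benefit (values from the table).
# Reaction-time metrics are in ms, where lower (more negative) is better.
def relative_retention_pct(retention, benefit):
    return retention / benefit * 100

print(round(relative_retention_pct(-17.64, -5.6)))   # simple RT -> 315
print(round(relative_retention_pct(-15.5, -16.2)))   # digit vigilance RT -> 96
```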

      &lt;p&gt;From Smith et al. (2006):&lt;/p&gt;

      &lt;table&gt;
        &lt;thead&gt;
          &lt;tr&gt;
            &lt;th&gt;Metric&lt;/th&gt;
            &lt;th&gt;Retention&lt;/th&gt;
            &lt;th&gt;Benefit&lt;/th&gt;
            &lt;th&gt;Retention %&lt;/th&gt;
          &lt;/tr&gt;
        &lt;/thead&gt;
        &lt;tbody&gt;
          &lt;tr&gt;
            &lt;td&gt;focused attention speed&lt;/td&gt;
            &lt;td&gt;-4.2&lt;/td&gt;
            &lt;td&gt;-8.6&lt;/td&gt;
            &lt;td&gt;49%&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
            &lt;td&gt;categoric search RT&lt;/td&gt;
            &lt;td&gt;-5.5&lt;/td&gt;
            &lt;td&gt;-16.4&lt;/td&gt;
            &lt;td&gt;34%&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
            &lt;td&gt;simple RT&lt;/td&gt;
            &lt;td&gt;-38.1&lt;/td&gt;
            &lt;td&gt;-41.4&lt;/td&gt;
            &lt;td&gt;92%&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
            &lt;td&gt;repeated digits vigilance&lt;/td&gt;
            &lt;td&gt;2.13&lt;/td&gt;
            &lt;td&gt;3.32&lt;/td&gt;
            &lt;td&gt;64%&lt;/td&gt;
          &lt;/tr&gt;
        &lt;/tbody&gt;
      &lt;/table&gt;

      &lt;p&gt;(Both studies reported a lot more metrics, but they highlighted these as the most important.)&lt;/p&gt;

      &lt;p&gt;For the Haskell et al. data, on each of the five metrics, the difference between retention and benefit was not statistically significant at p=0.05, and the absolute retention was not significantly different from zero (digit vigilance RT came close with p = 0.054).&lt;/p&gt;

      &lt;p&gt;I can’t do significance tests on the Smith et al. data because the paper didn’t report standard errors. But given that it had a smaller sample size than Haskell, I’d guess none of the differences are statistically significant.&lt;/p&gt;

      &lt;p&gt;I wanted to do some Bayesian statistics where I’d calculate a posterior distribution over relative retention based on every individual metric from the three studies. But I realized it would be kind of pointless because Rogers et al. (2013) doesn’t report p-values (only significance / insignificance), so I can’t do odds updates for the study that should provide by far the biggest odds updates. &lt;a href=&quot;#fnref:21&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:5&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Jack E. James &amp;amp; Peter J. Rogers (2005). &lt;a href=&quot;https://doi.org/10.1007/s00213-005-0084-6&quot;&gt;Effects of caffeine on performance and mood: withdrawal reversal is the most plausible explanation.&lt;/a&gt; &lt;a href=&quot;#fnref:5&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:22&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Rogers et al. (2013) tested 8 metrics, including 3 self-reported metrics and 5 performance tests. I ignored one of the performance tests because the low-caffeine placebo group outperformed the low-caffeine caffeine group, which means the concept of “retention” is not well-defined. (There is no benefit to retain if caffeine has a negative effect.) &lt;a href=&quot;#fnref:22&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>Utilitarianism Isn't About Doing Bad Things for the Greater Good. It's About Doing the Most Good</title>
				<pubDate>Tue, 15 Aug 2023 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2023/08/15/utilitarianism_isnt_about_doing_bad_things/</link>
				<guid isPermaLink="true">http://mdickens.me/2023/08/15/utilitarianism_isnt_about_doing_bad_things/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;In the eyes of popular culture (and in the eyes of many philosophy professors), the essence of utilitarianism is “it’s okay to do bad things for the greater good.” In my mind, that’s not the essence of utilitarianism. The essence is, “doing more good is better than doing less good.”&lt;/p&gt;

&lt;p&gt;Utilitarianism is about doing the most good. You don’t do the most good by fretting over weird edge cases where you can harm someone to help other people. You do the most good by picking up massive free wins like donating to &lt;a href=&quot;https://www.givewell.org/&quot;&gt;effective charities&lt;/a&gt; where money does 100x more good than it would if you spent it on yourself.&lt;/p&gt;

&lt;p&gt;(Richard Y. Chappell might call this &lt;a href=&quot;https://rychappell.substack.com/p/beneficentrism?s=r&quot;&gt;beneficentrism&lt;/a&gt;: “the view that promoting the general welfare is deeply important, and should be amongst one’s central life projects.” You can be a beneficentrist without being a utilitarian, but if you’re a utilitarian, you have to be a beneficentrist, and as a utilitarian, being a beneficentrist is much more important than being a “do bad things for the greater good”-ist.)&lt;/p&gt;



                </description>
			</item>
		
			<item>
				<title>The United States Is Weird</title>
				<pubDate>Tue, 15 Aug 2023 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2023/08/15/USA_is_weird/</link>
				<guid isPermaLink="true">http://mdickens.me/2023/08/15/USA_is_weird/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;The United States is exceptional along many dimensions. Sometimes, people pick two of these dimensions and try to argue that one causes the other. And that’s probably true sometimes. But “the USA is #1 in the world on dimension X, and #1 on dimension Y” isn’t much evidence that X causes Y.&lt;/p&gt;

&lt;p&gt;The United States has:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;the 3rd largest &lt;a href=&quot;https://en.wikipedia.org/wiki/List_of_countries_and_dependencies_by_population&quot;&gt;population&lt;/a&gt;, and the largest population of any first-world country (by far)&lt;/li&gt;
  &lt;li&gt;the 3rd or 4th largest &lt;a href=&quot;https://en.wikipedia.org/wiki/List_of_countries_and_dependencies_by_area&quot;&gt;land area&lt;/a&gt; (depending on how you measure USA’s and China’s land)&lt;/li&gt;
  &lt;li&gt;the highest &lt;a href=&quot;https://en.wikipedia.org/wiki/List_of_countries_by_GDP_(nominal)&quot;&gt;GDP&lt;/a&gt; of any country&lt;/li&gt;
  &lt;li&gt;the highest &lt;a href=&quot;https://en.wikipedia.org/wiki/Median_income&quot;&gt;median income&lt;/a&gt;, the 8th highest &lt;a href=&quot;https://en.wikipedia.org/wiki/List_of_countries_by_GDP_(PPP)_per_capita&quot;&gt;GDP per capita (PPP)&lt;/a&gt;, and the highest GDP per capita of any large country (the top 7 countries combined have a lower population than California)&lt;/li&gt;
  &lt;li&gt;unusually high &lt;a href=&quot;https://en.wikipedia.org/wiki/List_of_countries_by_income_equality&quot;&gt;income inequality&lt;/a&gt; for a developed country&lt;/li&gt;
  &lt;li&gt;the highest &lt;a href=&quot;https://en.wikipedia.org/wiki/List_of_countries_by_total_health_expenditure_per_capita&quot;&gt;healthcare expenditure&lt;/a&gt; per capita&lt;/li&gt;
  &lt;li&gt;the highest &lt;a href=&quot;https://en.wikipedia.org/wiki/Estimated_number_of_civilian_guns_per_capita_by_country&quot;&gt;gun ownership&lt;/a&gt; per capita (with double the gun ownership of the #2 country)&lt;/li&gt;
  &lt;li&gt;an unusually high &lt;a href=&quot;https://en.wikipedia.org/wiki/List_of_countries_by_intentional_homicide_rate&quot;&gt;homicide rate&lt;/a&gt; for a developed country&lt;/li&gt;
  &lt;li&gt;the most &lt;a href=&quot;https://en.wikipedia.org/wiki/List_of_Nobel_laureates_by_country&quot;&gt;Nobel Prize winners&lt;/a&gt; (by a huge margin) and most &lt;a href=&quot;https://en.everybodywiki.com/List_of_countries_by_number_of_Fields_Medalists&quot;&gt;Fields Medalists&lt;/a&gt; (narrowly beating France)&lt;/li&gt;
  &lt;li&gt;the highest &lt;a href=&quot;https://en.wikipedia.org/wiki/List_of_countries_by_obesity_rate&quot;&gt;obesity rate&lt;/a&gt; of any large country, and the 11th highest overall&lt;/li&gt;
  &lt;li&gt;the &lt;a href=&quot;https://www.usnews.com/education/best-global-universities/rankings&quot;&gt;most top universities&lt;/a&gt; (whatever that means) by a wide margin&lt;/li&gt;
  &lt;li&gt;unusually low &lt;a href=&quot;https://en.wikipedia.org/wiki/List_of_countries_by_life_expectancy&quot;&gt;life expectancy&lt;/a&gt; for a developed country&lt;/li&gt;
  &lt;li&gt;an unusually high &lt;a href=&quot;https://en.wikipedia.org/wiki/List_of_sovereign_states_and_dependencies_by_total_fertility_rate&quot;&gt;fertility rate&lt;/a&gt; for a developed country&lt;/li&gt;
  &lt;li&gt;the 2nd most &lt;a href=&quot;https://en.wikipedia.org/wiki/List_of_countries_by_exports&quot;&gt;exports&lt;/a&gt; (after China) and the most &lt;a href=&quot;https://en.wikipedia.org/wiki/List_of_countries_by_imports&quot;&gt;imports&lt;/a&gt; (China is #2)&lt;/li&gt;
  &lt;li&gt;the most &lt;a href=&quot;https://en.wikipedia.org/wiki/List_of_countries_by_military_expenditures&quot;&gt;military expenditures&lt;/a&gt; (by a factor of 3) and the 2nd most &lt;a href=&quot;https://en.wikipedia.org/wiki/List_of_states_with_nuclear_weapons&quot;&gt;nuclear weapons&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;(Those were just the examples I could come up with in an hour of research.)&lt;/p&gt;

&lt;p&gt;(I also looked at a few stats where I thought the USA might be exceptional, but it turned out not to be: IQ, educational attainment, infant mortality&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;, and net immigration.)&lt;/p&gt;

&lt;p&gt;A lot of these facts are clearly intertwined—the fact that the US has the highest GDP is related to the facts that it has the highest military expenditures, the highest healthcare expenditures, and the 2nd highest exports. (But they’re not fully intertwined, because the US still has high military and healthcare expenditures relative to GDP.)&lt;/p&gt;

&lt;p&gt;For other facts, you can come up with narratives as to why they’re related—maybe the high obesity rate causes the low life expectancy, maybe high gun ownership causes the (relatively) high homicide rate. But maybe not. The United States is weird and I don’t have a great handle on why it’s weird (and, as far as I know, nobody else does either). Until someone comes up with a Grand Theory of National Weirdness, I’m reluctant to pick two ways in which the USA is weird and claim one causes the other.&lt;/p&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Some sources say the USA has high infant mortality. I didn’t look into this much, but the CIA World Factbook claims that the USA defines infant mortality more broadly than most countries, and if you adjust for this, infant mortality looks similar to most developed countries. &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>Should Patient Philanthropists Invest Differently?</title>
				<pubDate>Wed, 14 Sep 2022 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2022/09/14/should_patient_philanthropists_invest_differently/</link>
				<guid isPermaLink="true">http://mdickens.me/2022/09/14/should_patient_philanthropists_invest_differently/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;&lt;strong&gt;TLDR:&lt;/strong&gt; No.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;a href=&quot;https://mdickens.me/confidence_tags/&quot;&gt;Confidence&lt;/a&gt;: Somewhat likely.&lt;/em&gt;&lt;/p&gt;

&lt;h2 id=&quot;summary&quot;&gt;Summary&lt;/h2&gt;

&lt;p&gt;Some philanthropists &lt;a href=&quot;https://80000hours.org/podcast/episodes/phil-trammell-patient-philanthropy/&quot;&gt;discount the future&lt;/a&gt; much less than normal people. For philanthropists with low discount rates, does this change how they invest their money? Can they do anything to take advantage of other investors’ high time discounting?&lt;/p&gt;

&lt;p&gt;We can answer this question in two different ways.&lt;/p&gt;

&lt;p&gt;Should low-discount philanthropists invest differently &lt;em&gt;in theory&lt;/em&gt;? No. &lt;a href=&quot;#theory&quot;&gt;[More]&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Should low-discount philanthropists invest differently &lt;em&gt;in practice&lt;/em&gt;? The real world differs from the standard theoretical approach in a few ways. These differences suggest that low-discount philanthropists should favor risky and illiquid investments slightly more than high-discount investors do. But the difference is too small to matter in practice. &lt;a href=&quot;#practice&quot;&gt;[More]&lt;/a&gt;&lt;/p&gt;

&lt;!-- more --&gt;

&lt;p&gt;&lt;em&gt;Cross-posted to the &lt;a href=&quot;https://forum.effectivealtruism.org/posts/dvz2FWu2fTBG9E2oe/should-patient-philanthropists-invest-differently&quot;&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;h2 id=&quot;contents&quot;&gt;Contents&lt;/h2&gt;

&lt;ul id=&quot;markdown-toc&quot;&gt;
  &lt;li&gt;&lt;a href=&quot;#summary&quot; id=&quot;markdown-toc-summary&quot;&gt;Summary&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#contents&quot; id=&quot;markdown-toc-contents&quot;&gt;Contents&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#background&quot; id=&quot;markdown-toc-background&quot;&gt;Background&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#theory&quot; id=&quot;markdown-toc-theory&quot;&gt;Theory&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#practice&quot; id=&quot;markdown-toc-practice&quot;&gt;Practice&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#1-long-term-mean-reversion&quot; id=&quot;markdown-toc-1-long-term-mean-reversion&quot;&gt;1. Long-term mean reversion&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#2-uncertainty-about-future-returns&quot; id=&quot;markdown-toc-2-uncertainty-about-future-returns&quot;&gt;2. Uncertainty about future returns&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#3-illiquidity-premium&quot; id=&quot;markdown-toc-3-illiquidity-premium&quot;&gt;3. Illiquidity premium&lt;/a&gt;        &lt;ul&gt;
          &lt;li&gt;&lt;a href=&quot;#theoretical-illiquidity-premium&quot; id=&quot;markdown-toc-theoretical-illiquidity-premium&quot;&gt;Theoretical illiquidity premium&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#observed-illiquidity-premium&quot; id=&quot;markdown-toc-observed-illiquidity-premium&quot;&gt;Observed illiquidity premium&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#reconciling-predicted-and-observed-illiquidity-premiums&quot; id=&quot;markdown-toc-reconciling-predicted-and-observed-illiquidity-premiums&quot;&gt;Reconciling predicted and observed illiquidity premiums&lt;/a&gt;&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#a-note-on-conventional-wisdom&quot; id=&quot;markdown-toc-a-note-on-conventional-wisdom&quot;&gt;A note on conventional wisdom&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#conclusion&quot; id=&quot;markdown-toc-conclusion&quot;&gt;Conclusion&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#appendix-a-pedantry&quot; id=&quot;markdown-toc-appendix-a-pedantry&quot;&gt;Appendix A: Pedantry&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#appendix-b-how-i-calculated-historical-optimal-leverage&quot; id=&quot;markdown-toc-appendix-b-how-i-calculated-historical-optimal-leverage&quot;&gt;Appendix B: How I calculated historical optimal leverage&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#tables-of-results&quot; id=&quot;markdown-toc-tables-of-results&quot;&gt;Tables of results&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#appendix-c-how-i-calculated-optimal-private-equity-allocation&quot; id=&quot;markdown-toc-appendix-c-how-i-calculated-optimal-private-equity-allocation&quot;&gt;Appendix C: How I calculated optimal private equity allocation&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#appendix-d-market-volatility-over-longer-holding-periods&quot; id=&quot;markdown-toc-appendix-d-market-volatility-over-longer-holding-periods&quot;&gt;Appendix D: Market volatility over longer holding periods&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#appendix-e-illiquidity-in-real-estate&quot; id=&quot;markdown-toc-appendix-e-illiquidity-in-real-estate&quot;&gt;Appendix E: Illiquidity in real estate&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#illiquidity-premium-in-100-year-leases&quot; id=&quot;markdown-toc-illiquidity-premium-in-100-year-leases&quot;&gt;Illiquidity premium in 100-year leases&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#appendix-f-transformative-changes-to-market-behavior&quot; id=&quot;markdown-toc-appendix-f-transformative-changes-to-market-behavior&quot;&gt;Appendix F: Transformative changes to market behavior&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#notes&quot; id=&quot;markdown-toc-notes&quot;&gt;Notes&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h1 id=&quot;background&quot;&gt;Background&lt;/h1&gt;

&lt;p&gt;Many people have a positive pure time preference: they prefer present consumption over future consumption, regardless of any empirical facts. Philanthropists shouldn’t have a &lt;em&gt;pure&lt;/em&gt; positive discount rate, but they do have &lt;a href=&quot;https://mdickens.me/2020/07/03/estimating_discount_rate/&quot;&gt;some legitimate reasons&lt;/a&gt; to discount the future, including:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;An extinction or economic collapse could render their money useless&lt;/li&gt;
  &lt;li&gt;Their money could get expropriated (taxed, seized, or stolen)&lt;/li&gt;
  &lt;li&gt;They might not be able to ensure that future generations continue spending their wealth in the ways they want&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;(Some people talk about impatient vs. patient philanthropists who have positive vs. zero pure time preference, or &lt;a href=&quot;https://forum.effectivealtruism.org/posts/Eh7c9NhGynF4EiX3u/patient-vs-urgent-longtermism-has-little-direct-bearing-on&quot;&gt;urgent vs. patient longtermists&lt;/a&gt; who believe we do vs. don’t live in a particularly important time. For this essay, I’m talking about low-discount vs. high-discount philanthropists. Urgent longtermists usually have high discount rates and patient longtermists usually have low rates, but it’s not a 1:1 correspondence.)&lt;/p&gt;

&lt;p&gt;Many philanthropists don’t want to spend all their money at once; any money they don’t spend gets invested. And we would like to know how we should invest. For this essay, I will ask: how does optimal investment strategy change with discount rate? For philanthropists with unusually low discount rates, what should they do differently?&lt;/p&gt;

&lt;p&gt;Suppose philanthropists have a pool of wealth that they invest, and they consume a fixed proportion of wealth each year. (For a philanthropist, donating to charity counts as consumption.) They get some utility every time they spend money. What investment strategy maximizes expected utility? And how does that investment strategy vary as a function of discount rate?&lt;/p&gt;

&lt;h1 id=&quot;theory&quot;&gt;Theory&lt;/h1&gt;

&lt;p&gt;&lt;a href=&quot;http://lifecycleinvesting.net/Resources/merton%20lifetime%20portfolio%20selection%201969.pdf&quot;&gt;Merton (1969)&lt;/a&gt;&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;&lt;sup id=&quot;fnref:37&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:37&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt; derived a formula for how much leverage investors should use and how quickly they should spend down their assets (given some simplifying assumptions). Merton found that the optimal investment allocation does not depend on the investor’s discount rate.&lt;/p&gt;

&lt;p&gt;In theory, the optimal allocation only depends on three things:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;assets’ expected returns&lt;/li&gt;
  &lt;li&gt;assets’ covariances with each other&lt;/li&gt;
  &lt;li&gt;the investor’s relative risk aversion (RRA)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The first two are properties of the assets themselves, not of the investors. The only factor that (theoretically) should cause two investors to behave differently is their degree of risk aversion.&lt;/p&gt;

&lt;p&gt;The optimal rate of &lt;em&gt;consumption&lt;/em&gt;&lt;sup id=&quot;fnref:29&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:29&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt; does depend on one’s discount rate.&lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt; But the discount rate does not affect the choice of investment.&lt;/p&gt;
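&lt;p&gt;To make the result concrete, here is a small Python sketch of Merton’s closed-form solution (the parameter values are illustrative, not estimates). The discount rate appears in the optimal consumption rate but not in the optimal allocation:&lt;/p&gt;

```python
# Merton (1969) closed-form results for a CRRA investor, with
# illustrative parameters. The optimal risky-asset weight depends only
# on the asset and on risk aversion; the discount rate rho never
# appears in it.

def merton_weight(mu, r, sigma, rra):
    """Optimal fraction of wealth in the risky asset."""
    return (mu - r) / (rra * sigma ** 2)

def merton_consumption_rate(mu, r, sigma, rra, rho):
    """Optimal consumption as a fraction of wealth. Unlike the weight,
    this DOES depend on the discount rate rho."""
    sharpe_sq = ((mu - r) / sigma) ** 2
    return (rho + (rra - 1) * (r + sharpe_sq / (2 * rra))) / rra

mu, r, sigma = 0.08, 0.02, 0.16   # illustrative numbers
for rho in (0.0, 0.03):
    w = merton_weight(mu, r, sigma, rra=2)
    c = merton_consumption_rate(mu, r, sigma, rra=2, rho=rho)
    print(f"rho={rho:.2f}: weight={w:.3f}, consumption rate={c:.3%}")
```

&lt;p&gt;With these numbers, the optimal weight is about 1.17:1 no matter the discount rate, while the optimal consumption rate rises from about 2.8% to about 4.3% as the discount rate goes from 0% to 3%.&lt;/p&gt;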

&lt;p&gt;Merton’s proof depends on four key assumptions:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Investors have constant relative risk aversion.&lt;/li&gt;
  &lt;li&gt;Asset prices follow log-normal distributions.&lt;/li&gt;
  &lt;li&gt;Asset returns are independent across time.&lt;/li&gt;
  &lt;li&gt;Assets can be traded continuously.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The assumption of constant relative risk aversion means that investors’ risk preferences do not change when their wealth increases or decreases. This might be false, but it’s close enough to true, so I won’t question this assumption for now.&lt;sup id=&quot;fnref:11&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:11&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;5&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;If asset prices don’t follow log-normal distributions, then you cannot fully describe assets’ performance in terms of mean and covariance. This affects investment strategy (mostly by making us more concerned about tail risk), but it doesn’t affect how quickly we should consume.&lt;/p&gt;

&lt;p&gt;The third assumption—time independence—matters the most for our purposes. If investment returns tend to regress to the mean—in other words, if present and future returns are anticorrelated—then this changes how you should invest.&lt;/p&gt;

&lt;p&gt;Why? If returns are anticorrelated, then if you lose some money now, that increases your expectation that you’ll make it back later. It also decreases the probability that you’ll be able to compound unusually high returns for multiple years in a row. This matters regardless of discount rate, but it matters &lt;em&gt;more&lt;/em&gt; for investors with low discount rates because they care more about fluctuations in their future wealth.&lt;/p&gt;

&lt;p&gt;The fourth assumption—perfect liquidity—does not directly relate to investors’ discount rates, but it can still matter, as I discuss &lt;a href=&quot;#3-illiquidity-premium&quot;&gt;below&lt;/a&gt;.&lt;/p&gt;

&lt;h1 id=&quot;practice&quot;&gt;Practice&lt;/h1&gt;

&lt;p&gt;Low-discount philanthropists might invest differently for three reasons:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;long-term mean reversion&lt;/li&gt;
  &lt;li&gt;subjective uncertainty about future returns&lt;/li&gt;
  &lt;li&gt;illiquidity premium&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Long-term mean reversion suggests that:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Risk-averse (specifically, more risk-averse than logarithmic) low-discount investors should use more leverage than risk-averse high-discount investors.&lt;/li&gt;
  &lt;li&gt;Risk-tolerant (more risk-tolerant than logarithmic) low-discount investors should use &lt;em&gt;less&lt;/em&gt; leverage than risk-tolerant high-discount investors.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;(Confidence: Highly likely.)&lt;/p&gt;

&lt;p&gt;This crude drawing shows how leverage varies for low-discount vs. high-discount investors after accounting for long-term mean reversion:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/leverage-by-discount-crude.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;But this drawing overstates the size of the effect. (Or at least it would, if it had any labels or scale.) Empirically (based on historical returns), mean reversion only slightly changes the optimal amount of leverage, and probably doesn’t matter in practice. (Confidence: Likely.) &lt;a href=&quot;#1-long-term-mean-reversion&quot;&gt;[More]&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Over longer time periods, investors have greater subjective uncertainty about how markets will behave. This creates subjective long-term momentum, which has the opposite effect of mean reversion. (Confidence: Likely.) And, like with mean reversion, this effect only slightly differs between low- and high-discount investors. (Confidence: Likely.) &lt;a href=&quot;#2-uncertainty-about-future-returns&quot;&gt;[More]&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If long-term illiquid investments earn significantly higher returns, then investors with low discount rates prefer them relative to high-discount investors. (Confidence: Highly likely.) &lt;a href=&quot;#3-illiquidity-premium&quot;&gt;[More]&lt;/a&gt; But empirical data suggests they don’t earn an illiquidity premium, so illiquid investments don’t look particularly compelling, regardless of discount rate. (Confidence: Likely.) &lt;a href=&quot;#observed-illiquidity-premium&quot;&gt;[More]&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Some investors may prefer illiquid investments for behavioral reasons due to their (perceived but not actual) reduced volatility, and for these investors, a lower discount rate suggests a higher allocation to private equity. &lt;a href=&quot;#reconciling-predicted-and-observed-illiquidity-premiums&quot;&gt;[More]&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As a final point, conventional wisdom says long-term investors should take more risk. Conventional wisdom is correct, not because long-term investors have a lower discount rate, but because they can expect to earn future income, which reduces their effective risk. So this observation is not relevant to how low-discount philanthropists should invest. (Confidence: Almost certain.) &lt;a href=&quot;#a-note-on-conventional-wisdom&quot;&gt;[More]&lt;/a&gt;&lt;/p&gt;

&lt;h2 id=&quot;1-long-term-mean-reversion&quot;&gt;1. Long-term mean reversion&lt;/h2&gt;

&lt;p&gt;When Merton proved that asset allocation does not depend on time discounting, he assumed that investment returns are independent across time. But that’s not quite true. Historical evidence (weakly&lt;sup id=&quot;fnref:14&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:14&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;6&lt;/a&gt;&lt;/sup&gt;) suggests that stocks exhibit long-term mean reversion. When prices go up a lot over a 3+ year period, they have a slight tendency to go back down. And when they go down over 3+ years, they have a similar tendency to rebound upward. That makes stocks a little less risky over longer holding periods.&lt;/p&gt;

&lt;p&gt;How does this affect low-discount investors relative to high-discount investors?&lt;/p&gt;

&lt;p&gt;Long-term mean reversion decreases the variance of an investment while keeping its &lt;a href=&quot;https://corporatefinanceinstitute.com/resources/knowledge/other/what-is-geometric-mean/&quot;&gt;geometric mean&lt;/a&gt; fixed. Because the arithmetic mean approximately equals the geometric mean plus half the variance, it also decreases the arithmetic mean.&lt;/p&gt;
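&lt;p&gt;A quick simulation illustrates this (the parameters here are made up, not estimates): generate log returns with the same per-period mean and variance, with and without negative autocorrelation, and compare the cumulative outcomes.&lt;/p&gt;

```python
import numpy as np

rng = np.random.default_rng(0)
m, sigma, phi = 0.05, 0.18, -0.2    # mean log return, vol, AR(1) coefficient
n_years, n_paths = 10, 200_000

def cumulative_log_returns(phi):
    # Scale the shock so per-period variance is the same for every phi.
    eps_sd = sigma * np.sqrt(1 - phi ** 2)
    x = np.zeros(n_paths)
    total = np.zeros(n_paths)
    for _ in range(n_years):
        x = phi * x + rng.normal(0.0, eps_sd, n_paths)
        total += m + x
    return total

for label, p in [("iid", 0.0), ("mean-reverting", phi)]:
    logs = cumulative_log_returns(p)
    growth = np.exp(logs)
    print(f"{label:>15}: geo mean/yr {np.exp(logs.mean() / n_years) - 1:.2%}  "
          f"arith mean/yr {growth.mean() ** (1 / n_years) - 1:.2%}  "
          f"sd of log growth {logs.std():.3f}")
```

&lt;p&gt;The mean-reverting series ends with the same geometric mean but noticeably lower dispersion, and correspondingly a lower annualized arithmetic mean.&lt;/p&gt;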

&lt;p&gt;Low-discount investors feel the decreased volatility more strongly than high-discount investors do because they care more about their investment performance in later years. But for the same reason, they more strongly feel the decrease in arithmetic mean return.&lt;/p&gt;

&lt;p&gt;How this affects investors depends on their degree of risk aversion:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;For investors with logarithmic utility, maximizing utility is equivalent to maximizing geometric mean, so they don’t care at all about mean reversion.&lt;/li&gt;
  &lt;li&gt;“Risk-averse” investors (those with sub-logarithmic utility) care more about volatility than about arithmetic mean, so they like mean reversion.&lt;/li&gt;
  &lt;li&gt;“Risk-tolerant” investors (those with super-logarithmic utility) care more about arithmetic mean than about volatility, so they dislike mean reversion.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Risk-averse low-discount investors want &lt;em&gt;more&lt;/em&gt; leverage than risk-averse high-discount investors, while risk-tolerant low-discount investors want &lt;em&gt;less&lt;/em&gt; leverage than risk-tolerant high-discount investors.&lt;sup id=&quot;fnref:21&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:21&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;7&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;More precisely:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;At logarithmic utility, discount rate does not matter.&lt;/li&gt;
  &lt;li&gt;For risk-averse investors, desired leverage decreases with discount rate.&lt;/li&gt;
  &lt;li&gt;For risk-tolerant investors, desired leverage increases with discount rate.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;At least that’s the hypothesis.&lt;/p&gt;

&lt;p&gt;How strong are these effects? It turns out, pretty weak. I used data from &lt;a href=&quot;https://academic.oup.com/qje/article/134/3/1225/5435538&quot;&gt;The Rate of Return on Everything, 1870–2015&lt;/a&gt;&lt;sup id=&quot;fnref:8&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:8&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;8&lt;/a&gt;&lt;/sup&gt; and the &lt;a href=&quot;https://mba.tuck.dartmouth.edu/pages/faculty/ken.french/data_library.html&quot;&gt;Ken French Data Library&lt;/a&gt; to calculate historical utility-maximizing leverage for an equity investor in three environments: the US market from 1872 to 2015; the UK market from 1871 to 2015; and the developed-world markets (excluding the US) from 1990 to 2021.&lt;/p&gt;
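&lt;p&gt;As a rough sketch of what this kind of calculation looks like (this is a reconstruction, not the code behind the charts below, and it uses randomly generated stand-in data rather than the real return series): grid-search the leverage that maximizes discounted CRRA utility of consumption along a return path, consuming a fixed fraction of wealth each year.&lt;/p&gt;

```python
import numpy as np

def crra_utility(c, rra):
    if rra == 1:
        return np.log(c)
    return (c ** (1 - rra) - 1) / (1 - rra)

def discounted_utility(returns, leverage, rra, discount,
                       spend=0.04, borrow_rate=0.02):
    """Discounted CRRA utility of consumption along one return path,
    consuming a fixed fraction of wealth each year."""
    wealth, total = 1.0, 0.0
    for t, r in enumerate(returns):
        consumption = spend * wealth
        total += crra_utility(consumption, rra) / (1 + discount) ** t
        wealth = (wealth - consumption) * (
            1 + leverage * r - (leverage - 1) * borrow_rate)
        if not wealth > 0:            # ruin: assign a very bad utility
            return -1e18
    return total

def optimal_leverage(returns, rra, discount):
    grid = np.arange(0.5, 3.01, 0.01)
    return max(grid, key=lambda L: discounted_utility(returns, L, rra, discount))

rng = np.random.default_rng(1)
fake_history = rng.normal(0.07, 0.18, 144)   # stand-in for the 1872-2015 data
print(round(float(optimal_leverage(fake_history, rra=2, discount=0.03)), 2))
```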

&lt;p&gt;Each of these historical cases confirmed the hypothesis, but they showed a weak effect size.&lt;/p&gt;

&lt;p&gt;These two charts show excess leverage by discount rate for US and UK equities, compared to a baseline 3% discount rate:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/excess-leverage-by-discount-rate-us.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/excess-leverage-by-discount-rate-uk.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;(I didn’t graph excess leverage for RRA &amp;lt; 1 because a solution usually does not exist.&lt;sup id=&quot;fnref:30&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:30&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;9&lt;/a&gt;&lt;/sup&gt;)&lt;/p&gt;

&lt;p&gt;We can see from these charts that a zero-discount investor wants at most 4 percentage points more leverage than an investor with a 3% discount rate (1.78:1 vs. 1.74:1, respectively&lt;sup id=&quot;fnref:26&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:26&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;10&lt;/a&gt;&lt;/sup&gt;). And the effect gets smaller as relative risk aversion (RRA) gets closer to 1.&lt;/p&gt;

&lt;p&gt;So, while long-term mean reversion does theoretically change how low-discount philanthropists should invest, the difference is effectively a rounding error. (I can’t even say whether philanthropists should prefer (say) 1.5:1 leverage or 2:1 leverage, much less 1.78:1 or 1.74:1.)&lt;/p&gt;

&lt;p&gt;(I should mention that, while equities exhibit long-term mean reversion, bonds appear to show long-term momentum, which would produce the opposite effect. However, historically, bonds did not have stable geometric means, and the historically optimal leverage for bonds showed the same results as for equities with respect to discount rate and RRA. And as with equities, the effect is too small to matter in practice.)&lt;/p&gt;

&lt;p&gt;See &lt;a href=&quot;#appendix-b-how-i-calculated-historical-optimal-leverage&quot;&gt;Appendix B&lt;/a&gt; for the details of my calculations.&lt;/p&gt;

&lt;h2 id=&quot;2-uncertainty-about-future-returns&quot;&gt;2. Uncertainty about future returns&lt;/h2&gt;

&lt;p&gt;&lt;a href=&quot;https://onlinelibrary.wiley.com/doi/pdf/10.1111/j.1540-6261.2012.01722.x&quot;&gt;Pastor &amp;amp; Stambaugh (2012)&lt;/a&gt;&lt;sup id=&quot;fnref:13&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:13&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;11&lt;/a&gt;&lt;/sup&gt; showed that, due to uncertainty about average future returns, investors’ subjective volatility increases with time horizon. This produces the opposite effect of mean reversion: risk-averse low-discount investors want &lt;em&gt;less&lt;/em&gt; leverage than risk-averse high-discount investors, while risk-tolerant low-discount investors want &lt;em&gt;more&lt;/em&gt; leverage than risk-tolerant high-discount investors.&lt;/p&gt;

&lt;p&gt;Behold, another crudely-drawn graph:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/leverage-by-discount-crude-uncertainty.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;According to Pastor &amp;amp; Stambaugh, parameter uncertainty matters more than long-term mean reversion, but by less than a factor of two. That means the combined impact of mean reversion + uncertainty has the opposite sign and a smaller magnitude than mean reversion alone. As we saw, the practical impact of mean reversion only varies slightly between low-discount and high-discount investors, so the impact of mean reversion + uncertainty varies even less.&lt;/p&gt;

&lt;p&gt;Pastor &amp;amp; Stambaugh did not look at parameter uncertainty across discount rates, so I did my own analysis. I tested historically optimal leverage with parameter uncertainty by adding an “uncertainty factor” to the average return. This uncertainty factor was a normally distributed random number whose 95% confidence interval spanned –2% to 2% (representing uncertainty about the true average return).&lt;/p&gt;
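&lt;p&gt;A sketch of this kind of adjustment (again a reconstruction with illustrative parameters, not the actual analysis): shift each simulated path’s mean return by a normal draw whose 95% confidence interval spans –2% to 2%, then redo the leverage grid search.&lt;/p&gt;

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma, borrow_rate, rra = 0.08, 0.17, 0.02, 2
n_paths, n_years = 20_000, 30

# Common random numbers across the leverage grid keep the search smooth.
base = rng.normal(mu, sigma, (n_paths, n_years))
shift_unit = rng.normal(0.0, 1.0, (n_paths, 1))

def expected_utility(leverage, shift_sd):
    # shift_sd is the sd of the persistent shift to each path's mean return.
    r = base + shift_sd * shift_unit
    growth = 1 + leverage * r - (leverage - 1) * borrow_rate
    wealth = np.clip(growth, 1e-3, None).prod(axis=1)  # clip acts as a ruin penalty
    return np.mean((wealth ** (1 - rra) - 1) / (1 - rra))

grid = np.arange(0.5, 2.51, 0.05)
for label, shift_sd in [("fixed mean", 0.0), ("uncertain mean", 0.02 / 1.96)]:
    best = max(grid, key=lambda L: expected_utility(L, shift_sd))
    print(label, round(float(best), 2))
```

&lt;p&gt;Because the mean shift persists across an entire path, it adds variance that compounds with the horizon, which for a risk-averse investor (RRA = 2 here) should push optimal leverage down.&lt;/p&gt;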

&lt;p&gt;As expected, the uncertainty factor does not affect optimal leverage for logarithmic-utility investors. For risk-averse investors, uncertainty decreases optimal leverage, and it decreases more for low-discount investors than for high-discount ones:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/excess-leverage-due-to-uncertainty-us.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;(UK equities show the same pattern, but a weaker effect.&lt;sup id=&quot;fnref:27&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:27&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;12&lt;/a&gt;&lt;/sup&gt;)&lt;/p&gt;

&lt;p&gt;When incorporating both mean reversion and uncertainty, risk-averse low-discount investors still preferred to use more leverage than risk-averse high-discount investors, but by a smaller margin than they would given certainty:&lt;sup id=&quot;fnref:28&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:28&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;13&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/excess-uncertainty-adjusted-leverage-us.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;(Notice how on this chart, the maximum excess leverage is ~0.03, while in the original chart with fixed parameters (“Excess Leverage by Discount Rate (US)”), the max was ~0.04.)&lt;/p&gt;

&lt;p&gt;UK equities show a similar effect, except that in some cases, low-discount investors want (slightly) &lt;em&gt;less&lt;/em&gt; leverage than high-discount investors:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/excess-uncertainty-adjusted-leverage-uk.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;For risk-tolerant investors (with super-logarithmic utility), adding uncertainty increases desired leverage, and the effect is larger for low-discount investors. However, it’s hard to quantify this effect because at low discount rates, sufficiently risk-tolerant investors want to use infinite leverage. (I would interpret this as evidence that philanthropists don’t have super-logarithmic utility functions.)&lt;/p&gt;

&lt;p&gt;Note that Pastor &amp;amp; Stambaugh found that parameter uncertainty outweighed mean reversion, while I found the opposite. If my analysis had looked more like Pastor &amp;amp; Stambaugh’s, then we would get the opposite result after accounting for both mean reversion and parameter uncertainty. But it doesn’t really matter because the effect is small either way.&lt;/p&gt;

&lt;h2 id=&quot;3-illiquidity-premium&quot;&gt;3. Illiquidity premium&lt;/h2&gt;

&lt;p&gt;Low-discount philanthropists might not need to spend their money for a long time. Does that mean they should buy illiquid investments?&lt;/p&gt;

&lt;p&gt;I will focus on private equity because it’s the most straightforward type of illiquid investment, and it has a clear analogue in public equity. First I will consider what illiquidity premium we should expect in theory for private equity. Then I will look at the observed illiquidity premium, and then attempt to reconcile the two.&lt;/p&gt;

&lt;p&gt;See &lt;a href=&quot;#appendix-e-illiquidity-in-real-estate&quot;&gt;Appendix E&lt;/a&gt; for some discussion of illiquidity in real estate.&lt;/p&gt;

&lt;h3 id=&quot;theoretical-illiquidity-premium&quot;&gt;Theoretical illiquidity premium&lt;/h3&gt;

&lt;p&gt;In theory, should low-discount investors like illiquid investments more than high-discount investors do?&lt;/p&gt;

&lt;p&gt;People’s preferences for liquidity do not directly depend on discount rate, but they do depend on their consumption rate. If you want to spend a (relatively) large percentage of your capital each year, then you’re more reluctant to lock up a big chunk of money in illiquid investments. There’s a risk that the market tanks and you end up spending down too big a proportion of your liquid assets.&lt;/p&gt;

&lt;p&gt;All else equal, a higher discount rate means a higher spending rate. Low-discount philanthropists may consume only a tiny fraction of their wealth each year, which means they’re not concerned about over-spending their liquid investments.&lt;/p&gt;
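&lt;p&gt;A toy example of the concern (made-up numbers): suppose 30% of wealth is locked in an illiquid asset, all spending comes out of the remaining liquid 70%, and the market then falls 20% a year for three years.&lt;/p&gt;

```python
# Toy illustration of the liquidity concern. Spending is held at a fixed
# fraction of *initial* wealth (a simplification), and only the liquid
# sleeve can be spent.
liquid_start = 0.70
for spend in (0.08, 0.005):          # high spender vs. patient philanthropist
    liq = liquid_start
    for year in range(3):
        liq = (liq - spend) * 0.8    # spend, then take the 20% loss
    print(f"spend {spend:.1%}/yr: liquid sleeve ends at {liq:.1%} "
          f"(now spending {spend / liq:.0%} of it per year)")
```

&lt;p&gt;The 8% spender’s liquid sleeve shrinks to about 20% of initial wealth, so each year’s spending now consumes roughly 40% of what’s left, while the 0.5% spender barely notices.&lt;/p&gt;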

&lt;p&gt;In a two-asset model with public equity and (&lt;a href=&quot;#appendix-c-how-i-calculated-optimal-private-equity-allocation&quot;&gt;synthetic&lt;/a&gt;) private equity, low-discount investors allocate 1 to 5 percentage points more to private equity than high-discount investors:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/pe-absolute-allocation-0pct.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/pe-relative-allocation-0pct.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;(“Absolute” is the allocation as a percentage of wealth/liquidation value; “relative” is the allocation as a percentage of public + private allocation. Public + private absolute allocation can exceed 100% due to leverage.)&lt;/p&gt;

&lt;p&gt;See &lt;a href=&quot;#appendix-c-how-i-calculated-optimal-private-equity-allocation&quot;&gt;Appendix C&lt;/a&gt; for the details on methodology.&lt;/p&gt;

&lt;h3 id=&quot;observed-illiquidity-premium&quot;&gt;Observed illiquidity premium&lt;/h3&gt;

&lt;p&gt;How large a premium can investors earn by holding private equity instead of public stocks?&lt;/p&gt;

&lt;p&gt;Historically, private equity in the United States earned a 3% premium (&lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1932316&quot;&gt;Harris et al., 2014&lt;/a&gt;&lt;sup id=&quot;fnref:34&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:34&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;14&lt;/a&gt;&lt;/sup&gt;; &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2627312&quot;&gt;Kaplan &amp;amp; Sensoy, 2015&lt;/a&gt;&lt;sup id=&quot;fnref:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;15&lt;/a&gt;&lt;/sup&gt;). However, this premium is not explained by illiquidity. The private equity premium is fully subsumed by a leveraged small value portfolio in public equities (&lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2639647&quot;&gt;Chingono &amp;amp; Rasmussen, 2015&lt;/a&gt;&lt;sup id=&quot;fnref:4&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:4&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;16&lt;/a&gt;&lt;/sup&gt;; &lt;a href=&quot;https://www.aqr.com/Insights/Research/White-Papers/Demystifying-Illiquid-Assets-Expected-Returns-for-Private-Equity&quot;&gt;Ilmanen et al., 2019&lt;/a&gt;&lt;sup id=&quot;fnref:33&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:33&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;17&lt;/a&gt;&lt;/sup&gt;; &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2720479&quot;&gt;Stafford, 2017&lt;/a&gt;&lt;sup id=&quot;fnref:32&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:32&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;18&lt;/a&gt;&lt;/sup&gt;). In fact, private equity has (large but statistically insignificant) &lt;em&gt;negative&lt;/em&gt; alpha compared to leveraged small value.&lt;/p&gt;

&lt;p&gt;Given zero (or negative) alpha for private equity, investors do not want to hold it, regardless of discount rate.&lt;/p&gt;

&lt;h3 id=&quot;reconciling-predicted-and-observed-illiquidity-premiums&quot;&gt;Reconciling predicted and observed illiquidity premiums&lt;/h3&gt;

&lt;p&gt;If theory predicts an illiquidity premium, why don’t we see one in historical data?&lt;/p&gt;

&lt;p&gt;The most common explanation (e.g., as given in &lt;a href=&quot;https://www.aqr.com/Insights/Research/White-Papers/Demystifying-Illiquid-Assets-Expected-Returns-for-Private-Equity&quot;&gt;Ilmanen et al., 2019&lt;/a&gt;&lt;sup id=&quot;fnref:33:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:33&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;17&lt;/a&gt;&lt;/sup&gt; and &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2639647&quot;&gt;Chingono &amp;amp; Rasmussen, 2015&lt;/a&gt;&lt;sup id=&quot;fnref:4:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:4&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;16&lt;/a&gt;&lt;/sup&gt;) is that investors are willing to pay a premium for the return smoothing provided by private equity. Unlike a public market where the price updates many times per second, private equity firms have leeway to report essentially whatever valuations they want. They tend to under-report drawdowns, making returns for private companies look more consistent than their public counterparts, but with no corresponding decrease in true underlying risk (&lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2379170&quot;&gt;Welch &amp;amp; Stubben, 2018&lt;/a&gt;&lt;sup id=&quot;fnref:36&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:36&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;19&lt;/a&gt;&lt;/sup&gt;). (In 2022 when the S&amp;amp;P 500 experienced a 20+% drawdown, the average private equity firm reported a 0% change in valuation, which is &lt;a href=&quot;https://www.aqr.com/Insights/Perspectives/Introducing-the-New-AQR-SMOOTH-Fund&quot;&gt;absurd&lt;/a&gt; on its face, but they’re allowed to do it.)&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/choffstein-tweet.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Investors’ preference for return smoothing is, in some sense, irrational—it doesn’t provide any financial benefit. But if you know that seeing your portfolio lose value will give you anguish, and might cause you to make a bad decision, then it could be rational to tie yourself to the mast by buying private equity.&lt;/p&gt;

&lt;p&gt;Evidence suggests that, at least in the current market environment, rational investors should not invest in private equity, regardless of discount rate. But human investors with typical human flaws may do better by investing in private equity, even if it’s not technically optimal. And low-discount investors should allocate on the order of 5 percentage points more to private equity than high-discount investors do. Sophisticated institutional investors tend to allocate 10% to 15% to private equity,&lt;sup id=&quot;fnref:40&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:40&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;20&lt;/a&gt;&lt;/sup&gt; which would suggest a 15% to 20% allocation for philanthropists with low discount rates. This is a small enough difference that it might not matter in practice, but it’s still a much bigger difference than we found for long-term mean reversion or parameter uncertainty.&lt;/p&gt;

&lt;p&gt;I personally do not invest in private equity because I’m not concerned about the behavioral issues (and because it’s hard to invest in, but that’s secondary).&lt;/p&gt;

&lt;h2 id=&quot;a-note-on-conventional-wisdom&quot;&gt;A note on conventional wisdom&lt;/h2&gt;

&lt;p&gt;Conventional wisdom says that investors with long time horizons should take on more risk. Why? Does that apply to low-discount philanthropists?&lt;/p&gt;

&lt;p&gt;The conventional wisdom is correct for most people, but not because long-term investors &lt;em&gt;inherently&lt;/em&gt; want more risk. Rather, long-term investors (usually) expect to earn more income in the future, and they should count that income as part of their investment portfolio. After factoring in their future income, they should seek the same amount of risk as a short-term investor.&lt;/p&gt;

&lt;p&gt;For example, say Alice is retired with $1 million, and she invests her retirement fund in 50% stocks, 50% bonds. Bob has $100,000 in savings and expects to earn another $900,000 before he retires. He invests 100% of his savings in stocks. Bob’s portfolio looks riskier than Alice’s, but it’s actually less risky: Alice has 50% of her portfolio in stocks, while Bob effectively has only 10% after accounting for his future income. In theory, if Bob used 5:1 leverage, he would get his effective stock allocation up to 50%, although that’s probably not feasible in practice.&lt;/p&gt;
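&lt;p&gt;The Alice/Bob arithmetic can be sketched in a few lines (a minimal illustration; the function name is mine, and it treats expected future income as a bond-like part of the portfolio):&lt;/p&gt;

```python
def effective_stock_allocation(savings, stock_fraction, future_income):
    """Fraction of total wealth (savings plus expected future income)
    effectively held in stocks."""
    total_wealth = savings + future_income
    return savings * stock_fraction / total_wealth

# Alice: retired with $1 million, 50% stocks, no future income.
alice = effective_stock_allocation(1_000_000, 0.5, 0)

# Bob: $100,000 in savings, 100% stocks, $900,000 of future income.
bob = effective_stock_allocation(100_000, 1.0, 900_000)

# With 5:1 leverage, Bob's effective allocation matches Alice's.
bob_levered = effective_stock_allocation(100_000, 5.0, 900_000)

print(alice, bob, bob_levered)  # 0.5 0.1 0.5
```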

&lt;p&gt;For low-discount philanthropists, how much risk to take depends on what money they expect to earn in the future. That depends on their own income, the income of all like-minded philanthropists, and the wealth and income of anyone in the future who will become like-minded. Figuring out all those details is beyond the scope of this essay. The bottom line is: yes, the conventional wisdom is correct in a sense, but it doesn’t contradict my earlier claim that an investor’s risk appetite does not depend on time horizon, and it doesn’t have anything to do with discount rates.&lt;/p&gt;

&lt;h1 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h1&gt;

&lt;p&gt;In summary:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;According to standard theory, investors should not change their asset allocation based on their discount rate.&lt;/li&gt;
  &lt;li&gt;Standard theory assumes that time periods are independent and that all investments are highly liquid, both of which are (sometimes) false.&lt;/li&gt;
  &lt;li&gt;Investment returns are not entirely independent over time. This has a nonzero but negligible effect on high- vs. low-discount investors.&lt;/li&gt;
  &lt;li&gt;It looks like illiquid assets don’t earn a significant premium, so low-discount investors shouldn’t prefer them.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Therefore, low-discount philanthropists don’t need to allocate any differently than high-discount investors.&lt;/p&gt;

&lt;h1 id=&quot;appendix-a-pedantry&quot;&gt;Appendix A: Pedantry&lt;/h1&gt;

&lt;p&gt;In this essay, I somewhat conflated low-discount philanthropists with long-term philanthropists, but they’re not necessarily the same thing. For our purposes, “long-term” means you have a (relatively) long time horizon. A 3% discount rate would be considered high, but a philanthropist with a 3% discount rate could still easily have a 50+ year time horizon.&lt;/p&gt;

&lt;p&gt;A low discount rate is not the same thing as longtermism (at least the way I use the term). A longtermist might have a high discount rate, for instance if they believe extinction is likely to render their funds useless in the near future, or if they expect not to be able to preserve the intent of their endowment across multiple generations. And someone with a 0% discount rate doesn’t have to be a longtermist—e.g., some form of person-affecting view could preclude longtermism even with no time discounting.&lt;/p&gt;

&lt;p&gt;Philanthropists with particularly pessimistic beliefs about existential risk might have &lt;em&gt;higher&lt;/em&gt; discount rates than normal people, which would suggest doing the opposite of what low-discount philanthropists do (which, if I’m right, is nothing, so it doesn’t matter either way).&lt;/p&gt;

&lt;p&gt;Not all low-discount investors are low-discount philanthropists, but I use the terms interchangeably because I only care about investors who are also philanthropists.&lt;/p&gt;

&lt;h1 id=&quot;appendix-b-how-i-calculated-historical-optimal-leverage&quot;&gt;Appendix B: How I calculated historical optimal leverage&lt;/h1&gt;

&lt;p&gt;I used data from &lt;a href=&quot;https://academic.oup.com/qje/article/134/3/1225/5435538&quot;&gt;The Rate of Return on Everything, 1870–2015 (RORE)&lt;/a&gt;&lt;sup id=&quot;fnref:8:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:8&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;8&lt;/a&gt;&lt;/sup&gt; and the &lt;a href=&quot;https://mba.tuck.dartmouth.edu/pages/faculty/ken.french/data_library.html&quot;&gt;Ken French Data Library&lt;/a&gt;, focusing on four data series:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;US equities 1926 to 2021, from Ken French&lt;/li&gt;
  &lt;li&gt;Developed ex-US equities 1990 to 2021, from Ken French&lt;/li&gt;
  &lt;li&gt;UK equities 1899 to 2015, from RORE&lt;/li&gt;
  &lt;li&gt;US equities 1872 to 2015, from RORE&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;(In the main essay, I did not discuss #1 because it’s mostly a subset of #4. I only looked at both as a sanity check.)&lt;/p&gt;

&lt;p&gt;I implemented the standard &lt;a href=&quot;https://plato.stanford.edu/entries/ramsey-economics/&quot;&gt;Ramsey model&lt;/a&gt; of intergenerational welfare where we have a pool of wealth that we invest at some rate of return, and every year we consume some fixed percentage of our wealth. We have a utility function over consumption, and total utility is the sum of discounted utility of consumption for each period. Then we simultaneously solve for the optimal amount of portfolio leverage and optimal rate of consumption. (For this essay, we don’t care about consumption, but we still have to solve for both variables.)&lt;/p&gt;

&lt;p&gt;Applying this model over historical data requires making some modifications. We can’t calculate total utility out to infinity because we only have a finite sample of real historical financial returns. Naively, we could simply maximize utility over the sample we have, but this has two major problems:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;We don’t discount the last year all that much relative to the first year. Any money left over at the end has a high opportunity cost. The optimizer wants to end up with as close to $0 as possible, so it recommends consuming at a faster rate than we’d want to if we had an infinite time horizon.&lt;/li&gt;
  &lt;li&gt;The outcome strongly depends on what happens to our investments near the beginning of the sample because the beginning is discounted the least.&lt;sup id=&quot;fnref:20&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:20&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;21&lt;/a&gt;&lt;/sup&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I implemented (somewhat ad-hoc) solutions for both of these problems:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Loop through the historical sample multiple times, repeating it until the extended series spans 1000 years.&lt;sup id=&quot;fnref:19&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:19&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;22&lt;/a&gt;&lt;/sup&gt; The 1000th year has minimal impact on total utility, so the optimal solution over 1000 years looks almost exactly like the optimal solution over infinity years.&lt;/li&gt;
  &lt;li&gt;Instead of only starting at the first year in the sample, calculate total utility for every possible starting year and then take the average.&lt;/li&gt;
&lt;/ol&gt;
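&lt;p&gt;The procedure above can be sketched as follows (a minimal illustration, not the exact code I used: it assumes CRRA utility, applies discounting annually, and ignores borrowing costs, as in the RORE tables; the function names are mine):&lt;/p&gt;

```python
import numpy as np

def crra_utility(c, rra):
    """CRRA utility of consumption; log utility in the rra == 1 limit."""
    if rra == 1:
        return np.log(c)
    return (c ** (1 - rra) - 1) / (1 - rra)

def avg_discounted_utility(returns, leverage, consumption, rra, discount,
                           horizon=1000):
    """Discounted lifetime utility of a fixed consumption rate, averaged
    over every possible starting year, looping the historical sample
    until it spans `horizon` years."""
    n = len(returns)
    total = 0.0
    for start in range(n):
        wealth = 1.0
        utility = 0.0
        for t in range(horizon):
            consumed = wealth * consumption
            utility += crra_utility(consumed, rra) / (1 + discount) ** t
            r = returns[(start + t) % n]
            wealth = (wealth - consumed) * (1 + leverage * r)
            if not wealth > 0:  # leverage wiped out the portfolio
                return -np.inf
        total += utility
    return total / n
```

&lt;p&gt;One would then search over &lt;code&gt;leverage&lt;/code&gt; and &lt;code&gt;consumption&lt;/code&gt; simultaneously (e.g., with a grid search or a numerical optimizer) for the pair that maximizes this quantity.&lt;/p&gt;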

&lt;h2 id=&quot;tables-of-results&quot;&gt;Tables of results&lt;/h2&gt;

&lt;p&gt;The tables below give optimal leverage and consumption for various rates of relative risk aversion and discount rates, given a point estimate of the expected return (no parameter uncertainty). I included 1% and 3% discount rates, except at 0.75 RRA, where the solution did not always exist, so I included higher discount rates instead. For RRA &amp;gt; 1, I also included a 0% discount rate.&lt;/p&gt;

&lt;p&gt;For Ken French data (Tables 1 and 2), I applied discounting and consumption on a monthly basis and estimated the cost of leverage at the risk-free rate as given in the data.&lt;/p&gt;

&lt;p&gt;For RORE data (Table 3), I applied discounting and consumption on an annual basis and estimated the cost of leverage at 0% because I don’t have data on the risk-free rate going back far enough. This is wrong, but it doesn’t matter for our purposes.&lt;/p&gt;

&lt;p&gt;Table 1. US equities, 1926 to 2021.&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;RRA&lt;/th&gt;
      &lt;th&gt;Discount&lt;/th&gt;
      &lt;th&gt;Leverage&lt;/th&gt;
      &lt;th&gt;Consumption&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;0.75&lt;/td&gt;
      &lt;td&gt;1%&lt;/td&gt;
      &lt;td&gt;DNE&lt;/td&gt;
      &lt;td&gt;DNE&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;0.75&lt;/td&gt;
      &lt;td&gt;3%&lt;/td&gt;
      &lt;td&gt;2.20&lt;/td&gt;
      &lt;td&gt;0.43%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;0.75&lt;/td&gt;
      &lt;td&gt;5%&lt;/td&gt;
      &lt;td&gt;2.36&lt;/td&gt;
      &lt;td&gt;2.65%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;1&lt;/td&gt;
      &lt;td&gt;1%&lt;/td&gt;
      &lt;td&gt;2.16&lt;/td&gt;
      &lt;td&gt;1.00%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;1&lt;/td&gt;
      &lt;td&gt;3%&lt;/td&gt;
      &lt;td&gt;2.16&lt;/td&gt;
      &lt;td&gt;3.02%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;2&lt;/td&gt;
      &lt;td&gt;0%&lt;/td&gt;
      &lt;td&gt;1.47&lt;/td&gt;
      &lt;td&gt;5.56%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;2&lt;/td&gt;
      &lt;td&gt;1%&lt;/td&gt;
      &lt;td&gt;1.45&lt;/td&gt;
      &lt;td&gt;6.00%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;2&lt;/td&gt;
      &lt;td&gt;3%&lt;/td&gt;
      &lt;td&gt;1.40&lt;/td&gt;
      &lt;td&gt;6.89%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;&lt;em&gt;Geometric mean is maximized at 1.90 leverage.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Table 2. Developed ex-US equities, 1990 to 2021.&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;RRA&lt;/th&gt;
      &lt;th&gt;Discount&lt;/th&gt;
      &lt;th&gt;Leverage&lt;/th&gt;
      &lt;th&gt;Consumption&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;0.75&lt;/td&gt;
      &lt;td&gt;1%&lt;/td&gt;
      &lt;td&gt;DNE&lt;/td&gt;
      &lt;td&gt;DNE&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;0.75&lt;/td&gt;
      &lt;td&gt;3%&lt;/td&gt;
      &lt;td&gt;1.57&lt;/td&gt;
      &lt;td&gt;2.09%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;0.75&lt;/td&gt;
      &lt;td&gt;5%&lt;/td&gt;
      &lt;td&gt;1.58&lt;/td&gt;
      &lt;td&gt;4.84%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;1&lt;/td&gt;
      &lt;td&gt;1%&lt;/td&gt;
      &lt;td&gt;1.56&lt;/td&gt;
      &lt;td&gt;1.05%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;1&lt;/td&gt;
      &lt;td&gt;3%&lt;/td&gt;
      &lt;td&gt;1.55&lt;/td&gt;
      &lt;td&gt;3.03%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;2&lt;/td&gt;
      &lt;td&gt;0%&lt;/td&gt;
      &lt;td&gt;1.48&lt;/td&gt;
      &lt;td&gt;2.98%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;2&lt;/td&gt;
      &lt;td&gt;1%&lt;/td&gt;
      &lt;td&gt;1.45&lt;/td&gt;
      &lt;td&gt;3.43%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;2&lt;/td&gt;
      &lt;td&gt;3%&lt;/td&gt;
      &lt;td&gt;1.42&lt;/td&gt;
      &lt;td&gt;4.44%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;&lt;em&gt;Geometric mean is maximized at 1.17 leverage.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Table 3. UK equities, 1871 to 2015.&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;RRA&lt;/th&gt;
      &lt;th&gt;Discount&lt;/th&gt;
      &lt;th&gt;Leverage&lt;/th&gt;
      &lt;th&gt;Consumption&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;0.9&lt;/td&gt;
      &lt;td&gt;3%&lt;/td&gt;
      &lt;td&gt;1.92&lt;/td&gt;
      &lt;td&gt;1.92%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;0.9&lt;/td&gt;
      &lt;td&gt;5%&lt;/td&gt;
      &lt;td&gt;1.88&lt;/td&gt;
      &lt;td&gt;4.10%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;1&lt;/td&gt;
      &lt;td&gt;1%&lt;/td&gt;
      &lt;td&gt;1.86&lt;/td&gt;
      &lt;td&gt;1.00%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;1&lt;/td&gt;
      &lt;td&gt;3%&lt;/td&gt;
      &lt;td&gt;1.86&lt;/td&gt;
      &lt;td&gt;2.98%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;2&lt;/td&gt;
      &lt;td&gt;0%&lt;/td&gt;
      &lt;td&gt;1.79&lt;/td&gt;
      &lt;td&gt;5.36%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;2&lt;/td&gt;
      &lt;td&gt;1%&lt;/td&gt;
      &lt;td&gt;1.78&lt;/td&gt;
      &lt;td&gt;5.69%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;2&lt;/td&gt;
      &lt;td&gt;3%&lt;/td&gt;
      &lt;td&gt;1.76&lt;/td&gt;
      &lt;td&gt;6.54%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;A portfolio that combines the US market beta + value + momentum factors shows the same pattern (with RRA &amp;lt; 1, optimal leverage increases with discount rate; with RRA &amp;gt; 1, optimal leverage decreases with discount rate). I calculated this portfolio using Ken French data as the simple sum of ‘Mkt’ + ‘HML’ + ‘Mom’ (UMD).&lt;/p&gt;

&lt;p&gt;Table 4. US market beta + value + momentum factors, 1927 to 2021.&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;RRA&lt;/th&gt;
      &lt;th&gt;Discount&lt;/th&gt;
      &lt;th&gt;Leverage&lt;/th&gt;
      &lt;th&gt;Consumption&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;0.9&lt;/td&gt;
      &lt;td&gt;3%&lt;/td&gt;
      &lt;td&gt;3.05&lt;/td&gt;
      &lt;td&gt;0.14%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;0.9&lt;/td&gt;
      &lt;td&gt;5%&lt;/td&gt;
      &lt;td&gt;3.15&lt;/td&gt;
      &lt;td&gt;1.53%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;1&lt;/td&gt;
      &lt;td&gt;1%&lt;/td&gt;
      &lt;td&gt;3.04&lt;/td&gt;
      &lt;td&gt;1.00%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;1&lt;/td&gt;
      &lt;td&gt;3%&lt;/td&gt;
      &lt;td&gt;3.04&lt;/td&gt;
      &lt;td&gt;3.04%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;2&lt;/td&gt;
      &lt;td&gt;0%&lt;/td&gt;
      &lt;td&gt;1.57&lt;/td&gt;
      &lt;td&gt;9.59%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;2&lt;/td&gt;
      &lt;td&gt;1%&lt;/td&gt;
      &lt;td&gt;1.56&lt;/td&gt;
      &lt;td&gt;9.92%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;2&lt;/td&gt;
      &lt;td&gt;3%&lt;/td&gt;
      &lt;td&gt;1.55&lt;/td&gt;
      &lt;td&gt;10.82%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;This chart shows excess optimal leverage (relative to a baseline 3% discount rate) for a US market + value + momentum factor portfolio with RRA &amp;gt; 1:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/excess-uncertainty-adjusted-leverage-factors.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;This suggests that, for a factor investor, discount rate matters somewhat less than for a passive investor.&lt;/p&gt;

&lt;h1 id=&quot;appendix-c-how-i-calculated-optimal-private-equity-allocation&quot;&gt;Appendix C: How I calculated optimal private equity allocation&lt;/h1&gt;

&lt;p&gt;There is some pre-existing literature on the theoretical illiquidity premium, such as Ang (2014).&lt;sup id=&quot;fnref:35&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:35&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;23&lt;/a&gt;&lt;/sup&gt; However, I could not find any research on how the illiquidity premium varies as a function of discount rate, so I did my own analysis.&lt;/p&gt;

&lt;p&gt;I calculated the optimal allocation across two assets labeled “public equity” and “private equity”. I used RORE&lt;sup id=&quot;fnref:8:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:8&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;8&lt;/a&gt;&lt;/sup&gt; data on the US stock market back to 1872 as the public equity component. For private equity, we don’t have data going back very far, so as a proxy, I used RORE data on UK public equities. Private equity behaves mostly like public equity, but not exactly; similarly, UK equities behave mostly like US equities, but not exactly. So for this test, I took historical returns for UK public equities and pretended they represented private equity, then multiplied the UK annual returns by 1.5 to account for the fact that private equity tends to be more volatile.&lt;/p&gt;

&lt;p&gt;The resulting “private equity” return series has an 11.6% annualized return, compared to 8.9% for US public equity; this matches the observed 3% private equity premium.&lt;/p&gt;
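&lt;p&gt;A sketch of the proxy construction (with placeholder random return series standing in for the RORE data, which I can’t reproduce here):&lt;/p&gt;

```python
import numpy as np

def annualized_return(returns):
    """Geometric mean annual return."""
    returns = np.asarray(returns)
    return np.prod(1 + returns) ** (1 / len(returns)) - 1

# Placeholder annual returns; the real analysis uses RORE data on US
# equities (public component) and UK equities (raw material for the proxy).
rng = np.random.default_rng(0)
us_public = rng.normal(0.08, 0.15, size=144)
uk_public = rng.normal(0.06, 0.15, size=144)

# Proxy "private equity": UK returns scaled by 1.5 for higher volatility.
private_proxy = 1.5 * uk_public

premium = annualized_return(private_proxy) - annualized_return(us_public)
```

&lt;p&gt;(With the real data, this construction yields the 11.6% vs. 8.9% annualized returns quoted above; the placeholder series here won’t reproduce those numbers.)&lt;/p&gt;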

&lt;p&gt;In my simulation, investors can only allocate to private equity once every 10 years. Any money they invest stays locked up, and they can rebalance at the end of every 10-year period. Investors can use leverage on public equity but not on private equity.&lt;sup id=&quot;fnref:39&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:39&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;24&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;This produced the following relative allocations to private equity at various discount rates and levels of risk aversion:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/pe-relative-allocation-0pct.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;“Private equity” had +6.6% &lt;a href=&quot;https://www.investopedia.com/terms/a/alpha.asp&quot;&gt;alpha&lt;/a&gt; over public equity. If we subtract 6.6% from private equity returns to give it zero alpha, the optimal allocation becomes 0% regardless of risk aversion or discount rate.&lt;/p&gt;

&lt;p&gt;If we add a pure +2% premium to private equity, this increases the optimal allocation, and also increases the differential between low- and high-discount investors, with low-discount investors now allocating 2 to 9 percentage points more to private equity than high-discount investors:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/pe-relative-allocation-2pct.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;If we make the lockup period longer than 10 years, it reduces the optimal allocation for all investors, and it increases the differential between low- and high-discount investors. The opposite happens if we shorten the lockup period.&lt;/p&gt;

&lt;p&gt;In some cases, there is no optimal allocation because it’s always better to delay spending indefinitely. This condition holds at (RRA = 1, discount rate = 0%) and at RRA &amp;lt; 1 for sufficiently low discount rates (including RRA = 0.8, discount rate = 3%).&lt;/p&gt;

&lt;p&gt;(Originally, I wanted to set the average return of the “private equity” asset such that the high-discount-optimal allocation reflects the real-world allocation to private equity, and then calculate the low-discount-optimal allocation given the same inputs. But I couldn’t find good data on the total market cap of private equity, so I just used inputs that seemed reasonable.&lt;sup id=&quot;fnref:31&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:31&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;25&lt;/a&gt;&lt;/sup&gt;)&lt;/p&gt;

&lt;h1 id=&quot;appendix-d-market-volatility-over-longer-holding-periods&quot;&gt;Appendix D: Market volatility over longer holding periods&lt;/h1&gt;

&lt;p&gt;Stock markets exhibit long-term mean reversion. As shown in Appendix B, this encourages risk-averse investors with lower discount rates to use more leverage. I calculated this using some complicated&lt;sup id=&quot;fnref:41&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:41&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;26&lt;/a&gt;&lt;/sup&gt; and hard-to-intuit math. For a more intuitive approach, we could look at market volatility over different holding periods.&lt;/p&gt;

&lt;p&gt;Jeremy Siegel’s &lt;em&gt;Stocks for the Long Run&lt;/em&gt;&lt;sup id=&quot;fnref:12&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:12&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;27&lt;/a&gt;&lt;/sup&gt; looked at the US equity market from 1802 to 2012 and found that volatility decreases over longer holding periods. I extended these results to global equities using data from &lt;a href=&quot;https://academic.oup.com/qje/article/134/3/1225/5435538&quot;&gt;The Rate of Return on Everything, 1870–2015&lt;/a&gt;&lt;sup id=&quot;fnref:8:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:8&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;8&lt;/a&gt;&lt;/sup&gt;. I looked at the annualized standard deviation&lt;sup id=&quot;fnref:22&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:22&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;28&lt;/a&gt;&lt;/sup&gt; of stock markets in 8 different countries&lt;sup id=&quot;fnref:43&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:43&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;29&lt;/a&gt;&lt;/sup&gt; for holding periods ranging from 1 to 30 years.&lt;sup id=&quot;fnref:16&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:16&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;30&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/annualized-stdev-by-period-equities.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;(30 years is more or less the longest holding period we can say anything about because there are only four non-overlapping 30-year periods in the sample.)&lt;/p&gt;

&lt;p&gt;This chart shows a weak downward trend. On average, for every year that you increase the holding period, the annualized standard deviation decreases by 0.14% (p &amp;lt; 0.002, but you shouldn’t take this p-value literally&lt;sup id=&quot;fnref:10&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:10&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;31&lt;/a&gt;&lt;/sup&gt;), although this number is pretty sensitive to how far you extend the regression—if you exclude the particularly-noisy 21 to 30 year holding period range, the slope reduces to 0.004% with p = 0.94. And we can’t tell if the trend extends beyond 30 years, but it doesn’t look like it does.&lt;sup id=&quot;fnref:9&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:9&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;32&lt;/a&gt;&lt;/sup&gt; So stocks are somewhere between 0 and 4 percentage points less volatile over 30 years than over 1 year.&lt;/p&gt;
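&lt;p&gt;A sketch of the holding-period calculation (one common convention, using overlapping windows of log returns; the footnoted definition used for the chart may differ in details):&lt;/p&gt;

```python
import numpy as np

def annualized_stdev(returns, holding_period):
    """Annualized volatility over n-year holding periods: the standard
    deviation of cumulative log returns across (overlapping) n-year
    windows, scaled back to annual terms by dividing by sqrt(n)."""
    log_r = np.log1p(np.asarray(returns))
    n = holding_period
    windows = np.array([log_r[i:i + n].sum()
                        for i in range(len(log_r) - n + 1)])
    return windows.std(ddof=1) / np.sqrt(n)
```

&lt;p&gt;Under i.i.d. returns this statistic is flat across horizons, so a downward slope with holding period is evidence of mean reversion, and an upward slope suggests the opposite.&lt;/p&gt;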

&lt;p&gt;&lt;em&gt;(Siegel (2014)&lt;sup id=&quot;fnref:12:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:12&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;27&lt;/a&gt;&lt;/sup&gt; found a larger and more unequivocal mean reversion effect. That’s mostly because it only covered the US market, and you will notice that in the chart above, the United States shows stronger mean reversion than most countries.)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;At the same time, stock markets’ arithmetic means decrease over longer holding periods:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/arithmetic-return-by-period-equities.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Long-term mean reversion works in favor of risk-averse investors (RRA &amp;gt; 1) and against risk-tolerant investors (RRA &amp;lt; 1).&lt;/p&gt;

&lt;p&gt;Bonds get &lt;em&gt;more&lt;/em&gt; volatile as you increase the holding period, from an average of 7% annualized volatility up to 12%:&lt;sup id=&quot;fnref:15&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:15&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;33&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/annualized-stdev-by-period-bonds.png&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;h1 id=&quot;appendix-e-illiquidity-in-real-estate&quot;&gt;Appendix E: Illiquidity in real estate&lt;/h1&gt;

&lt;p&gt;Real estate is less liquid than stocks, but not in a way that benefits low-discount philanthropists. Real estate costs money to trade, but it doesn’t have lockup periods like private equity, so we shouldn’t expect to see an illiquidity premium. There might be a “trading cost premium”, which benefits investors with long holding periods, but real estate investors tend to have long holding periods anyway, so this doesn’t make real estate look particularly good from a low-discount perspective.&lt;/p&gt;

&lt;p&gt;The main bottleneck to real estate investing is not liquidity but diversification. To properly diversify your real estate portfolio, you’d need to spend tens of millions of dollars buying dozens of properties. If you do have that much money, then investing in real estate is probably smart. Indeed, many wealthy, long-term-oriented investors such as university endowments own a lot of land.&lt;/p&gt;

&lt;h2 id=&quot;illiquidity-premium-in-100-year-leases&quot;&gt;Illiquidity premium in 100-year leases&lt;/h2&gt;

&lt;p&gt;Could low-discount philanthropists do well by buying much longer-term illiquid investments, such as 100-year property leases? Phil Trammell &lt;a href=&quot;https://forum.effectivealtruism.org/posts/amdReARfSvgf5PpKK/phil-trammell-philanthropy-timing-and-the-hinge-of-history&quot;&gt;proposed&lt;/a&gt; doing this. He claims that if you buy a house and simultaneously sell a 100-year lease, you gain the right to use the land 100 years from now and you only have to pay 10% of its present value. (I couldn’t find good data on the cost of a 100-year lease, so I’ll take his word for it.)&lt;/p&gt;

&lt;p&gt;A 10% price implies a 2.3% interest rate (\(1.023^{100} = 10\)), which sounds basically fair—maybe even too low. As an illustration, suppose you have $1 million and you’re deciding between two choices:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Buy 10 houses for a total of $10 million and simultaneously sell a 100-year lease on each of them. You spend $1 million on net, and you get 10 houses 100 years from now.&lt;/li&gt;
  &lt;li&gt;Buy one house for $1 million and rent it out. Suppose the annual rent is 2.3% of the value of the house. Then in 100 years, you use the earned rent to buy the other nine houses. You spend $1 million, and you get 10 houses 100 years from now.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;These two scenarios cost the same and provide the same value. And 2.3% is on the low end of rental yields; if you can earn a higher yield than that, it would make more sense to simply buy a house rather than buy 10 houses and sell 100-year leases on them.&lt;/p&gt;
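&lt;p&gt;To make the arithmetic concrete (a sketch; like the example above, it assumes the rent can be reinvested at the same 2.3% rate):&lt;/p&gt;

```python
# Implied interest rate of a 100-year lease priced at 10% of the
# house's present value.
implied_rate = (1 / 0.10) ** (1 / 100) - 1
print(round(implied_rate, 4))  # 0.0233, i.e. about 2.3%

# Option 2: buy one house and reinvest the 2.3% rental yield for
# 100 years; you end up with 10 houses' worth of wealth, matching
# option 1 (buy 10 houses, sell a 100-year lease on each).
wealth = 1.0
for _ in range(100):
    wealth *= 1 + implied_rate
print(round(wealth, 1))  # 10.0
```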

&lt;p&gt;(Why doesn’t the market set a higher implied interest rate on 100-year leases? Probably because houses are tradeable(ish): your money isn’t truly locked up for 100 years because you can sell the house at any time. So in an efficient market, the implied interest rate should equal the expected market yield, plus some other factors like differential taxes, maintenance costs, and the cost of selling the house multiplied by the probability of wanting to sell.)&lt;/p&gt;

&lt;h1 id=&quot;appendix-f-transformative-changes-to-market-behavior&quot;&gt;Appendix F: Transformative changes to market behavior&lt;/h1&gt;

&lt;p&gt;Over sufficiently long time horizons, markets might see significant changes to investment performance, such as:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Technological growth compounds on itself (possibly due to transformative AI), causing the rate of exponential growth to accelerate, greatly increasing returns.&lt;/li&gt;
  &lt;li&gt;We reach the physical limits of economic growth, and investments stop growing exponentially.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;There might be interesting things to say about these or other possible futures that break the current paradigm of consistent exponential growth. But that’s much more speculative, and I consider it out of scope for this essay.&lt;/p&gt;

&lt;p&gt;If you expect the economy to radically change in (say) 1000 years, but not to change much before then, then that doesn’t significantly affect how to invest &lt;em&gt;today&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;If you expect (with non-trivial probability) major changes in the near future (such as due to transformative AI), that does change how you should invest, but I’m not sure how,&lt;sup id=&quot;fnref:42&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:42&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;34&lt;/a&gt;&lt;/sup&gt; and that’s a question for another time.&lt;/p&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Robert Merton (1969). &lt;a href=&quot;http://lifecycleinvesting.net/Resources/merton%20lifetime%20portfolio%20selection%201969.pdf&quot;&gt;Lifetime Portfolio Selection Under Uncertainty: The Continuous-Time Case.&lt;/a&gt; &lt;a href=&quot;https://mdickens.me/2021/10/08/do_i_read_citations/&quot;&gt;Read status&lt;/a&gt;: Multiple full read-throughs. This is a seminal paper so I’m highly confident that it’s good. &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:37&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Not all of the results I describe are original to Merton, but he provided the first single paper that proves all of them. &lt;a href=&quot;#fnref:37&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:29&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;For our purposes, “consumption” refers to anything we do with money other than re-investing it. For example, if we donate money, that counts as a form of consumption. &lt;a href=&quot;#fnref:29&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:2&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;The exact solution for the optimal consumption rate is given in formula (42) of &lt;a href=&quot;http://lifecycleinvesting.net/Resources/merton%20lifetime%20portfolio%20selection%201969.pdf&quot;&gt;Merton (1969)&lt;/a&gt;. All else equal, the rate of consumption increases linearly with the discount rate. &lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:11&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;If you become more risk-averse as you get wealthier, then you should consume a larger proportion of wealth when you’re wealthier, and vice versa. &lt;a href=&quot;#fnref:11&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:14&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Paul McCaffrey (2022). &lt;a href=&quot;https://blogs.cfainstitute.org/investor/2022/07/29/equity-risk-premium-forum-term-structure-mean-reversion-and-cape-reconsidered/&quot;&gt;Equity Risk Premium Forum: Term Structure, Mean Reversion, and CAPE Reconsidered.&lt;/a&gt; Partial transcript of a discussion between Laurence B. Siegel, Rob Arnott, Elroy Dimson, William N. Goetzmann, Roger G. Ibbotson, Antti Ilmanen, Martin Leibowitz, Rajnish Mehra, and Jeremy Siegel.&lt;/p&gt;

      &lt;p&gt;Some relevant quotes:&lt;/p&gt;

      &lt;p&gt;Goetzmann: “The evidence is always a bit marginal and depends on your assumptions and on where you get the data.”&lt;/p&gt;

      &lt;p&gt;Ilmanen: “I see evidence of mean reversion over time horizons from 3 years up to 15 years. […] The evidence is really fuzzy, and usable or actionable evidence is almost zilch because of all this horizon uncertainty.” &lt;a href=&quot;#fnref:14&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:21&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;One thing I learned from writing this essay: I find it easy to think about two-dimensional relationships (“how does Y change as a function of X?”), but far harder to think about three-dimensional relationships (“how does ‘how does Z change as a function of Y’ change as a function of X?”). I kept finding myself getting the variables mixed up. &lt;a href=&quot;#fnref:21&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:8&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Òscar Jordà, Katharina Knoll, Dmitry Kuvshinov, Moritz Schularick &amp;amp; Alan M. Taylor (2019). &lt;a href=&quot;https://academic.oup.com/qje/article/134/3/1225/5435538&quot;&gt;The Rate of Return on Everything, 1870–2015.&lt;/a&gt; &lt;a href=&quot;#fnref:8&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:8:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt; &lt;a href=&quot;#fnref:8:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;3&lt;/sup&gt;&lt;/a&gt; &lt;a href=&quot;#fnref:8:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;4&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:30&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;To be precise, if prices follow a log-normal distribution, then a solution exists only when the following condition holds, where \(\delta\) is the discount rate, \(\gamma\) is the rate of relative risk aversion, \(\mu\) is the expected (geometric) portfolio return minus the risk-free rate, and \(\sigma\) is the portfolio standard deviation:&lt;/p&gt;

\[\frac{-\delta}{\gamma} + (\mu + \sigma^2/2)\frac{1-\gamma}{\gamma} - (1 - \gamma)\sigma^2/2 &amp;lt; 0\]

      &lt;p&gt;(Formula taken from Levhari &amp;amp; Srinivasan (1969). “Optimal Savings Under Uncertainty.”)&lt;/p&gt;

      &lt;p&gt;This graph shows the minimum solvable discount rate as a function of RRA for \(\mu = 7\%, \sigma=18\%\):&lt;/p&gt;

      &lt;p&gt;&lt;img src=&quot;/assets/images/min-solvable-discount-rate.png&quot; alt=&quot;&quot; /&gt; &lt;a href=&quot;#fnref:30&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:26&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;These specific numbers come from US equities with risk aversion coefficient = 2. &lt;a href=&quot;#fnref:26&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:13&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Lubos Pastor &amp;amp; Robert F. Stambaugh (2012). &lt;a href=&quot;https://onlinelibrary.wiley.com/doi/pdf/10.1111/j.1540-6261.2012.01722.x&quot;&gt;Are Stocks Really Less Volatile in the Long Run?&lt;/a&gt; &lt;a href=&quot;https://mdickens.me/2021/10/08/do_i_read_citations/&quot;&gt;Read status&lt;/a&gt;: I read the relevant parts. &lt;a href=&quot;#fnref:13&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:27&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;&lt;img src=&quot;/assets/images/excess-leverage-due-to-uncertainty-uk.png&quot; alt=&quot;&quot; /&gt; &lt;a href=&quot;#fnref:27&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:28&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Pastor &amp;amp; Stambaugh (2012) found that the parameter uncertainty effect outweighs the mean reversion effect, while I (usually) found the opposite. The difference is mostly because:&lt;/p&gt;

      &lt;ol&gt;
        &lt;li&gt;Pastor &amp;amp; Stambaugh looked at subjective portfolio volatility, while I looked at optimal leverage to maximize utility of consumption. These two things are related, but not the same.&lt;/li&gt;
        &lt;li&gt;Pastor &amp;amp; Stambaugh assumed no discount rate, while I looked at discount rates ranging from 0% to 3%. Subjective uncertainty can outweigh mean reversion at low discount rates. (Of the two data points I found where subjective uncertainty outweighed mean reversion, one occurred at discount=0% and one at discount=1%.)&lt;/li&gt;
      &lt;/ol&gt;
      &lt;p&gt;&lt;a href=&quot;#fnref:28&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:34&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Robert Harris, Tim Jenkinson &amp;amp; Steven Kaplan (2014). &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1932316&quot;&gt;Private Equity Performance: What Do We Know?&lt;/a&gt; &lt;a href=&quot;https://mdickens.me/2021/10/08/do_i_read_citations/&quot;&gt;Read status&lt;/a&gt;: Abstract only. &lt;a href=&quot;#fnref:34&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:3&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Steven Kaplan &amp;amp; Berk Sensoy (2015). &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2627312&quot;&gt;Private Equity Performance: A Survey.&lt;/a&gt; &lt;a href=&quot;https://mdickens.me/2021/10/08/do_i_read_citations/&quot;&gt;Read status&lt;/a&gt;: Abstract only. I’m citing two papers with different methodologies but the same result, which increases my confidence without having to analyze the papers. &lt;a href=&quot;#fnref:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:4&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Brian Chingono &amp;amp; Daniel Rasmussen (2015). &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2639647&quot;&gt;Leveraged Small Value Equities.&lt;/a&gt; &lt;a href=&quot;https://mdickens.me/2021/10/08/do_i_read_citations/&quot;&gt;Read status&lt;/a&gt;: I read the relevant parts.&lt;/p&gt;

      &lt;p&gt;See Figure 9 in &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2639647&quot;&gt;Chingono &amp;amp; Rasmussen (2015)&lt;/a&gt;, with explanation on the previous page. &lt;a href=&quot;#fnref:4&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:4:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:33&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Antti Ilmanen, Swati Chandra &amp;amp; Nicholas McQuinn (2019). &lt;a href=&quot;https://www.aqr.com/Insights/Research/White-Papers/Demystifying-Illiquid-Assets-Expected-Returns-for-Private-Equity&quot;&gt;Demystifying Illiquid Assets: Expected Returns for Private Equity.&lt;/a&gt; &lt;a href=&quot;https://mdickens.me/2021/10/08/do_i_read_citations/&quot;&gt;Read status&lt;/a&gt;: Full read-through. &lt;a href=&quot;#fnref:33&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:33:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:32&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Erik Stafford (2017). &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2720479&quot;&gt;Replicating Private Equity with Value Investing, Homemade Leverage, and Hold-to-Maturity Accounting.&lt;/a&gt; &lt;a href=&quot;https://mdickens.me/2021/10/08/do_i_read_citations/&quot;&gt;Read status&lt;/a&gt;: Abstract only. &lt;a href=&quot;#fnref:32&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:36&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Kyle Welch &amp;amp; Stephen Stubben (2018). &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2379170&quot;&gt;Private Equity’s Diversification Illusion: Evidence From Fair Value Accounting.&lt;/a&gt; &lt;a href=&quot;https://mdickens.me/2021/10/08/do_i_read_citations/&quot;&gt;Read status&lt;/a&gt;: Abstract only, plus the explanation of this paper as cited in Ilmanen et al. (2019).&lt;sup id=&quot;fnref:33:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:33&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;17&lt;/a&gt;&lt;/sup&gt; &lt;a href=&quot;#fnref:36&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:40&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Source: I think I read this somewhere. &lt;a href=&quot;#fnref:40&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:20&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Empirically, this effect matters a lot. I tested the utility-maximizing parameters for various start years, and the results differed substantially. I needed to average across 30+ start years to get a more stable result. &lt;a href=&quot;#fnref:20&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:19&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Technically, I extended it to last 1000 years for RORE data and 10,000 months for Ken French data. 10,000 months isn’t exactly 1000 years, but it’s close enough. &lt;a href=&quot;#fnref:19&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:35&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Andrew Ang (2014). Asset Management: A Systematic Approach to Factor Investing. Oxford University Press. &lt;a href=&quot;#fnref:35&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:39&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;You can leverage public securities by buying on margin or buying derivatives, but you usually can’t borrow money to invest in private equity funds. &lt;a href=&quot;#fnref:39&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:31&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;In 2018, private equity firms &lt;a href=&quot;https://www.mordorintelligence.com/industry-reports/global-private-equity-market&quot;&gt;spent $825 billion&lt;/a&gt;. In 2020, the global stock market was &lt;a href=&quot;https://www.visualcapitalist.com/all-of-the-worlds-money-and-markets-in-one-visualization-2020/&quot;&gt;worth $89 trillion&lt;/a&gt;. If private equity funds tend to last 10 years, that implies roughly $8.25 trillion invested in private equity at any given time, which makes private equity about 10% as big as public equity. So the efficient-market allocation for a normal investor would be ~90% to public equity, ~10% to private equity (ignoring all other asset classes). &lt;a href=&quot;#fnref:31&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:41&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;The math required isn’t that complicated; it’s just arithmetic. The complicated part is that the equations have over 140 variables (one for each year). &lt;a href=&quot;#fnref:41&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:12&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Jeremy Siegel (2014). Stocks for the Long Run. 5th Edition. Page 97, Figure 6-2. &lt;a href=&quot;#fnref:12&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:12:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:22&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;When returns are independent across time, the standard deviation of annualized returns decreases with the square root of the length of the holding period. To adjust for this, I “annualized” the computed standard deviation by multiplying by the square root of the holding period length (for example, multiplying the standard deviation of 25-year returns by \(\sqrt{25} = 5\)). &lt;a href=&quot;#fnref:22&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:43&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;The data set includes 15 countries, but I only included countries that have continuous data from at least 1872 to 2015. Some countries’ time series don’t start until later, and a few have missing years in the middle (for example, Japan’s stock market closed for two years after WWII). Excluding such countries biases the average return upward, but I’m not looking at the expected return, so I’m not too concerned about bias. &lt;a href=&quot;#fnref:43&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:16&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Chart shows nominal returns. Adjusting for inflation does not significantly change the results. &lt;a href=&quot;#fnref:16&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:10&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;The significance test assumes every sample is independent, but in reality, the (say) 20-year holding period sample is almost identical to the 21-year holding period sample. So a “statistically significant” result might still be noise. A more sophisticated significance test could probably correct for this, but I don’t know how to do that. &lt;a href=&quot;#fnref:10&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:9&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;It makes more sense to model long-term mean reversion as a power function of holding period rather than a linear function. Most of the mean reversion happens in the first few years, and it quickly levels off. &lt;a href=&quot;#fnref:9&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:15&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Siegel (2014)&lt;sup id=&quot;fnref:12:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:12&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;27&lt;/a&gt;&lt;/sup&gt; also found that US bonds exhibit long-term momentum. &lt;a href=&quot;#fnref:15&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:42&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I’ve spent on the order of 10 hours thinking about this question, and it’s still very unclear to me. &lt;a href=&quot;#fnref:42&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>Two Types of Scientific Misconceptions You Can Easily Disprove</title>
				<pubDate>Wed, 07 Sep 2022 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2022/09/07/two_types_of_scientific_misconceptions/</link>
				<guid isPermaLink="true">http://mdickens.me/2022/09/07/two_types_of_scientific_misconceptions/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;&lt;em&gt;Last updated 2023-03-22 to add another example.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;There are two opposite types of scientific claims that are easy to prove wrong: claims that never could have been proven in the first place, and claims that directly contradict your perception.&lt;/p&gt;

&lt;h2 id=&quot;type-1-the-unknowable-misconception&quot;&gt;Type 1: The unknowable misconception&lt;/h2&gt;

&lt;p&gt;A heuristic for identifying scientific misconceptions: If this were true, how would we know?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt; “People swallow 8 spiders per year during sleep.” If you were a scientist and you wanted to know how often people swallow spiders, how would you figure it out? You’d have to do a sleep study for thousands of person-nights where you film people sleeping using a night vision camera that’s so high-quality that it can pick up something as small as a spider (which, as far as I know, doesn’t exist) and then pore over the tens of thousands of hours of footage by hand to look for spiders (because this factoid originated in a time when computer software wasn’t sophisticated enough to do it for you) and track the locations of the spiders and count how often they crawl into people’s mouths without coming back out. This is all theoretically possible but it would be insanely expensive and who would be crazy enough to do it?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt; “The average man thinks about sex once every 7 seconds.” People can’t even introspect on their own thoughts on a continuous basis, so how would scientists do it? This one seems simply impossible to prove, regardless of how big your budget is or how crazy you are.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt; “Only 7% of communication is verbal, and 93% is nonverbal.” What does that even mean? How would a scientific study quantify all the information that two people transmit during a conversation and measure its informational complexity, and then conclude that 93% is nonverbal? You can kind of measure the information in words by compressing the text, but there’s no known way to accurately measure the information in nonverbal communication.&lt;/p&gt;

&lt;p&gt;(This factoid does come from an actual study, but what the study actually showed&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt; was that, among a sample of 37 university psychology students doing two different simple communication tasks, when verbal and nonverbal cues conflicted, they preferred the nonverbal cue 93% of the time.)&lt;/p&gt;

&lt;h2 id=&quot;type-2-the-perception-contradicting-misconception&quot;&gt;Type 2: The perception-contradicting misconception&lt;/h2&gt;

&lt;p&gt;You can disprove some common misconceptions using only your direct perception.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt; “Certain parts of the tongue can only detect certain tastes.” You can easily disprove this by placing food on different spots on your tongue.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt; “You need two eyes to have depth perception.” Close one eye. Notice how you still have depth perception. Or look at a photograph, which was taken from a fixed perspective. Notice how you can still detect depth in the photograph.&lt;/p&gt;

&lt;p&gt;You need two points of reference to get “true” depth perception, but the human visual cortex can almost always infer depth based on how a scene looks. It’s possible to trick your depth perception, like this:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/waterfall-illusion.jpeg&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;

&lt;p&gt;But in almost all real-world situations, you can correctly perceive depth with only one eye.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt; “You (only) have five senses.” You can prove that you have at least two more senses: balance (&lt;a href=&quot;https://en.wikipedia.org/wiki/Sense_of_balance&quot;&gt;equilibrioception&lt;/a&gt;) and the position of your limbs relative to each other (&lt;a href=&quot;https://en.wikipedia.org/wiki/Proprioception&quot;&gt;proprioception&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;You can prove you have a sense of balance by closing your eyes and walking without falling over. You’re not using any of the standard five senses, but you can still stay upright.&lt;/p&gt;

&lt;p&gt;You can prove you have proprioception by closing your eyes, flailing your arms around until they’re in random positions, and then bringing your hands together until your fingertips touch. You couldn’t do this if you didn’t have a proprioceptive sense.&lt;/p&gt;

&lt;p&gt;(Some scientists say we have more than seven senses, but the other ones are harder to prove.)&lt;/p&gt;

&lt;p&gt;So far I’ve given examples of factoids you can trivially disprove in five seconds. There are more misconceptions you can disprove if you’re willing to do a tiny bit of work. &lt;strong&gt;Example:&lt;/strong&gt; “Women have one more rib than men.” Find a friend of the opposite gender and count your ribs!&lt;/p&gt;


&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Philip Yaffe (2011). &lt;a href=&quot;https://ubiquity.acm.org/article.cfm?id=2043156&quot;&gt;The 7% Rule: Fact, Fiction, or Misunderstanding?&lt;/a&gt; &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>Most Theories Can't Explain Why Game of Thrones Went Downhill</title>
				<pubDate>Mon, 05 Sep 2022 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2022/09/05/cant_explain_game_of_thrones/</link>
				<guid isPermaLink="true">http://mdickens.me/2022/09/05/cant_explain_game_of_thrones/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;I’ve heard people repeat a few theories for why Game of Thrones started so well and ended so badly. Most of these theories don’t make sense.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Theory 1: David &amp;amp; Dan are good at adapting books, but they didn’t know what to do when they ran out of book.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Seasons 1 through 4, which adapted the first three books of &lt;em&gt;A Song of Ice and Fire&lt;/em&gt;, are up there with the greatest television shows ever made. Season 5, which adapted the fourth book and part of the fifth book, was mediocre. If this theory were true, season 5 should have been on par with the earlier seasons, but it wasn’t.&lt;/p&gt;

&lt;p&gt;Furthermore, season 6 was better than season 5, even though season 5 was still based on the books, and season 6 wasn’t.&lt;/p&gt;

&lt;p&gt;What’s more, David &amp;amp; Dan wrote some excellent original content in the earlier seasons, such as the extended arc with Arya and the Hound in season 4 (see &lt;a href=&quot;https://www.youtube.com/watch?v=EroNA0WdIww&quot;&gt;this scene&lt;/a&gt;, which wasn’t in the books).&lt;/p&gt;

&lt;p&gt;Some people say, well, they know how to write short scenes, but they don’t know how to write story arcs. Then how do you explain the famously terrible dialogue in the later seasons?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Theory 2: David &amp;amp; Dan were always bad showrunners.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I hear this one a lot. There’s some evidence for this theory—prior to &lt;em&gt;Game of Thrones&lt;/em&gt;, David was best known for writing &lt;em&gt;X-Men Origins: Wolverine&lt;/em&gt; (a famously bad movie), and Dan had no prior writing credits. But if they’re bad showrunners, why were the first four seasons so good? I can buy that bad showrunners might accidentally create a pretty good show, but I don’t see how they could accidentally create one of the best shows of all time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Theory 3: David &amp;amp; Dan lost interest and started phoning it in.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This explanation makes more sense because it can explain the nearly-monotonic decline in quality. But it still can’t explain why season 6 was better than season 5. And the timing doesn’t entirely work out—people usually say this about seasons 7 and 8, but season 5 was clearly worse than the previous four seasons, and it seems less plausible that they’d lose interest that early on.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Theory 4: Good writing emerges through a mysterious process that no one really understands.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is my favorite theory. Many occasionally-great writers can’t consistently replicate their success, writers can’t tell which of their works will become popular, and nobody fully understands what makes great writing. That’s why, for example, Jane Austen thought &lt;em&gt;Pride and Prejudice&lt;/em&gt; was her worst book, even though it’s what she’s most remembered for. Or why &lt;em&gt;The Matrix&lt;/em&gt; is my favorite movie of all time, even though I like &lt;del&gt;zero (0)&lt;/del&gt; one (1)&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt; other Wachowski movie. (The Wachowskis are another example of artists who occasionally produce brilliant works and most of the time don’t, and it’s not clear why.)&lt;/p&gt;

&lt;p&gt;Or why people used to talk about good art coming from a muse—you didn’t write that brilliant story, you just wrote down the words that your muse gave you, which is just a poetic way of saying you have no idea how you came up with it.&lt;/p&gt;

&lt;p&gt;This is kind of a non-explanation: “the reason &lt;em&gt;Game of Thrones&lt;/em&gt; was inconsistently good is because lots of things are inconsistently good and we don’t know why.” But at least it turns a localized mystery into a much bigger mystery about the general nature of creativity.&lt;/p&gt;


&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Edit 2023-09-04: Originally I wrote zero, but I just remembered that the Wachowskis co-wrote &lt;em&gt;V for Vendetta&lt;/em&gt;, which I enjoyed. This is an irrelevant minor detail but I am committed to factual accuracy even when it doesn’t matter. &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>Philanthropists Probably Shouldn't Mission-Hedge AI Progress</title>
				<pubDate>Tue, 23 Aug 2022 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2022/08/23/should_philanthropists_mission_hedge_ai_progress/</link>
				<guid isPermaLink="true">http://mdickens.me/2022/08/23/should_philanthropists_mission_hedge_ai_progress/</guid>
                <description>
                  
                  
                  
                  &lt;h2 id=&quot;summary&quot;&gt;Summary&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;&lt;a href=&quot;https://mdickens.me/confidence_tags/&quot;&gt;Confidence&lt;/a&gt;: Likely. &lt;a href=&quot;#credences&quot;&gt;[More]&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Some people have asked, “should we invest in companies that are likely to do particularly well if transformative AI is developed sooner than expected?”&lt;sup id=&quot;fnref:15&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:15&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;In a &lt;a href=&quot;https://mdickens.me/2022/04/04/model_of_mission-correlated_investing/&quot;&gt;previous essay&lt;/a&gt;, I developed a framework to evaluate mission-correlated investing. Today, I’m going to apply that framework to the cause of AI alignment.&lt;/p&gt;

&lt;p&gt;(I’m specifically looking at whether to &lt;em&gt;mission hedge&lt;/em&gt; AI, not whether to invest in AI in general. &lt;a href=&quot;#ai-companies-could-beat-the-market&quot;&gt;[More]&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;Whether to mission hedge crucially depends on three questions:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;What is the shape of the utility function with respect to AI progress?&lt;/li&gt;
  &lt;li&gt;How volatile is AI progress?&lt;/li&gt;
  &lt;li&gt;What investment has the strongest correlation to AI progress, and how strong is that correlation?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I came up with these answers:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Utility function: No clear answer, but I primarily used a utility function with a linear relationship between AI progress and the marginal utility of money. I also looked at a different function where AI timelines determine how long our wealth gets to compound. &lt;a href=&quot;#what-is-the-shape-of-the-utility-function&quot;&gt;[More]&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;Volatility: I looked at three proxies for AI progress—industry revenue, ML benchmark performance, and AI timeline forecasts. These proxies suggest that the standard deviation of AI progress falls somewhere between 4% and 20%. &lt;a href=&quot;#how-volatile-is-ai-progress&quot;&gt;[More]&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;Correlation: A naively constructed hedge portfolio would have a correlation of 0.3 at best. A bespoke hedge (such as an “AI progress swap”) would probably be too expensive. An intelligently constructed portfolio might work better, but I don’t know how much better. &lt;a href=&quot;#how-good-is-the-best-hedge&quot;&gt;[More]&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Across the range of assumptions I tested, mission hedging usually—but not always—looked worse on the margin&lt;sup id=&quot;fnref:23&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:23&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt; than investing in the mean-variance optimal portfolio with &lt;a href=&quot;https://mdickens.me/2020/01/06/how_much_leverage_should_altruists_use/&quot;&gt;leverage&lt;/a&gt;. Mission hedging looks better if the hedge asset is particularly volatile and has a particularly strong correlation to AI progress, and if we make conservative assumptions for the performance of the mean-variance optimal portfolio. &lt;a href=&quot;#results&quot;&gt;[More]&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The most obvious changes to my model argue against mission hedging. &lt;a href=&quot;#objections-to-this-methodology&quot;&gt;[More]&lt;/a&gt; But there’s room to argue in favor. &lt;a href=&quot;#whats-the-strongest-case-in-favor-of-mission-hedging&quot;&gt;[More]&lt;/a&gt;&lt;/p&gt;

&lt;!-- more --&gt;

&lt;p&gt;&lt;em&gt;Cross-posted to the &lt;a href=&quot;https://forum.effectivealtruism.org/posts/JD6QvQG3q5p6heKuA/philanthropists-probably-shouldn-t-mission-hedge-ai-progress&quot;&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;h2 id=&quot;contents&quot;&gt;Contents&lt;/h2&gt;

&lt;ul id=&quot;markdown-toc&quot;&gt;
  &lt;li&gt;&lt;a href=&quot;#summary&quot; id=&quot;markdown-toc-summary&quot;&gt;Summary&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#contents&quot; id=&quot;markdown-toc-contents&quot;&gt;Contents&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#methodology&quot; id=&quot;markdown-toc-methodology&quot;&gt;Methodology&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#why-care-about-mission-hedging-on-the-margin-rather-than-in-equilibrium&quot; id=&quot;markdown-toc-why-care-about-mission-hedging-on-the-margin-rather-than-in-equilibrium&quot;&gt;Why care about mission hedging on the margin rather than in equilibrium?&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#a-big-list-of-considerations&quot; id=&quot;markdown-toc-a-big-list-of-considerations&quot;&gt;A big list of considerations&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#what-is-the-shape-of-the-utility-function&quot; id=&quot;markdown-toc-what-is-the-shape-of-the-utility-function&quot;&gt;What is the shape of the utility function?&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#compound-until-agi-utility-function&quot; id=&quot;markdown-toc-compound-until-agi-utility-function&quot;&gt;Compound-until-AGI utility function&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#how-volatile-is-ai-progress&quot; id=&quot;markdown-toc-how-volatile-is-ai-progress&quot;&gt;How volatile is AI progress?&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#industry-revenue&quot; id=&quot;markdown-toc-industry-revenue&quot;&gt;Industry revenue&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#ml-benchmark-performance&quot; id=&quot;markdown-toc-ml-benchmark-performance&quot;&gt;ML benchmark performance&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#ai-timeline-forecasts&quot; id=&quot;markdown-toc-ai-timeline-forecasts&quot;&gt;AI timeline forecasts&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#how-good-is-the-best-hedge&quot; id=&quot;markdown-toc-how-good-is-the-best-hedge&quot;&gt;How good is the best hedge?&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#industry-index--industry-revenue&quot; id=&quot;markdown-toc-industry-index--industry-revenue&quot;&gt;Industry index &amp;lt;&amp;gt; industry revenue&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#semiconductor-index--ml-model-performance&quot; id=&quot;markdown-toc-semiconductor-index--ml-model-performance&quot;&gt;Semiconductor index &amp;lt;&amp;gt; ML model performance&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#ai-company-index--ml-model-performance&quot; id=&quot;markdown-toc-ai-company-index--ml-model-performance&quot;&gt;AI company index &amp;lt;&amp;gt; ML model performance&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#construct-a-derivative&quot; id=&quot;markdown-toc-construct-a-derivative&quot;&gt;Construct a derivative&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#can-we-do-better-without-a-custom-derivative&quot; id=&quot;markdown-toc-can-we-do-better-without-a-custom-derivative&quot;&gt;Can we do better (without a custom derivative)?&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#results&quot; id=&quot;markdown-toc-results&quot;&gt;Results&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#why-does-ordinary-investing-probably-look-better-than-mission-hedging&quot; id=&quot;markdown-toc-why-does-ordinary-investing-probably-look-better-than-mission-hedging&quot;&gt;Why does ordinary investing (probably) look better than mission hedging?&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#objections-to-this-methodology&quot; id=&quot;markdown-toc-objections-to-this-methodology&quot;&gt;Objections to this methodology&lt;/a&gt;        &lt;ul&gt;
          &lt;li&gt;&lt;a href=&quot;#impact-investing&quot; id=&quot;markdown-toc-impact-investing&quot;&gt;Impact investing&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#measurability-of-ai-progress&quot; id=&quot;markdown-toc-measurability-of-ai-progress&quot;&gt;Measurability of AI progress&lt;/a&gt;&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#whats-the-strongest-case-in-favor-of-mission-hedging&quot; id=&quot;markdown-toc-whats-the-strongest-case-in-favor-of-mission-hedging&quot;&gt;What’s the strongest case in favor of mission hedging?&lt;/a&gt;        &lt;ul&gt;
          &lt;li&gt;&lt;a href=&quot;#illegible-assumptions-could-bias-the-model-against-mission-hedging&quot; id=&quot;markdown-toc-illegible-assumptions-could-bias-the-model-against-mission-hedging&quot;&gt;Illegible assumptions could bias the model against mission hedging&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#the-volatility-of-ai-progress-could-increase&quot; id=&quot;markdown-toc-the-volatility-of-ai-progress-could-increase&quot;&gt;The volatility of AI progress could increase&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#investors-could-be-leverage-constrained&quot; id=&quot;markdown-toc-investors-could-be-leverage-constrained&quot;&gt;Investors could be leverage-constrained&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#ai-companies-could-beat-the-market&quot; id=&quot;markdown-toc-ai-companies-could-beat-the-market&quot;&gt;AI companies could beat the market&lt;/a&gt;&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#credences&quot; id=&quot;markdown-toc-credences&quot;&gt;Credences&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#future-work&quot; id=&quot;markdown-toc-future-work&quot;&gt;Future work&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#source-code&quot; id=&quot;markdown-toc-source-code&quot;&gt;Source code&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#acknowledgments&quot; id=&quot;markdown-toc-acknowledgments&quot;&gt;Acknowledgments&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#appendices&quot; id=&quot;markdown-toc-appendices&quot;&gt;Appendices&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#appendix-a-justifications-for-the-big-list-of-considerations&quot; id=&quot;markdown-toc-appendix-a-justifications-for-the-big-list-of-considerations&quot;&gt;Appendix A: Justifications for the big list of considerations&lt;/a&gt;        &lt;ul&gt;
          &lt;li&gt;&lt;a href=&quot;#shape-of-the-utility-function-with-respect-to-wealth-bounded-constant-relative-risk-aversion&quot; id=&quot;markdown-toc-shape-of-the-utility-function-with-respect-to-wealth-bounded-constant-relative-risk-aversion&quot;&gt;Shape of the utility function with respect to wealth: bounded constant relative risk aversion&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#coefficient-of-relative-risk-aversion-2&quot; id=&quot;markdown-toc-coefficient-of-relative-risk-aversion-2&quot;&gt;Coefficient of relative risk aversion: 2&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#shape-of-the-distribution-of-investment-returns-log-normal&quot; id=&quot;markdown-toc-shape-of-the-distribution-of-investment-returns-log-normal&quot;&gt;Shape of the distribution of investment returns: log-normal&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#shape-of-the-distribution-of-ai-progress-log-normal&quot; id=&quot;markdown-toc-shape-of-the-distribution-of-ai-progress-log-normal&quot;&gt;Shape of the distribution of AI progress: log-normal&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#arithmetic-return-of-the-mvo-asset-5-gmp-or-9-fp&quot; id=&quot;markdown-toc-arithmetic-return-of-the-mvo-asset-5-gmp-or-9-fp&quot;&gt;Arithmetic return of the MVO asset: 5% (GMP) or 9% (FP)&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#arithmetic-return-of-the-legacy-asset-7&quot; id=&quot;markdown-toc-arithmetic-return-of-the-legacy-asset-7&quot;&gt;Arithmetic return of the legacy asset: 7%&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#arithmetic-return-of-the-hedge-asset-7&quot; id=&quot;markdown-toc-arithmetic-return-of-the-hedge-asset-7&quot;&gt;Arithmetic return of the hedge asset: 7%&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#volatility-of-the-mvo-asset-9-gmp-or-13-fp&quot; id=&quot;markdown-toc-volatility-of-the-mvo-asset-9-gmp-or-13-fp&quot;&gt;Volatility of the MVO asset: 9% (GMP) or 13% (FP)&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#volatility-of-the-legacy-asset-30&quot; id=&quot;markdown-toc-volatility-of-the-legacy-asset-30&quot;&gt;Volatility of the legacy asset: 30%&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#volatility-of-the-hedge-asset-25&quot; id=&quot;markdown-toc-volatility-of-the-hedge-asset-25&quot;&gt;Volatility of the hedge asset: 25%&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#correlation-of-mvo-asset--legacy-asset-06-gmp-or-05-fp&quot; id=&quot;markdown-toc-correlation-of-mvo-asset--legacy-asset-06-gmp-or-05-fp&quot;&gt;Correlation of MVO asset &amp;lt;&amp;gt; legacy asset: 0.6 (GMP) or 0.5 (FP)&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#correlation-of-mvo-asset--hedge-asset-06-gmp-or-05-fp&quot; id=&quot;markdown-toc-correlation-of-mvo-asset--hedge-asset-06-gmp-or-05-fp&quot;&gt;Correlation of MVO asset &amp;lt;&amp;gt; hedge asset: 0.6 (GMP) or 0.5 (FP)&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#correlation-of-legacy-asset--hedge-asset-07&quot; id=&quot;markdown-toc-correlation-of-legacy-asset--hedge-asset-07&quot;&gt;Correlation of legacy asset &amp;lt;&amp;gt; hedge asset: 0.7&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#correlation-of-mvo-asset--ai-progress-00-gmp-or-01-fp&quot; id=&quot;markdown-toc-correlation-of-mvo-asset--ai-progress-00-gmp-or-01-fp&quot;&gt;Correlation of MVO asset &amp;lt;&amp;gt; AI progress: 0.0 (GMP) or –0.1 (FP)&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#correlation-of-legacy-asset--ai-progress-02&quot; id=&quot;markdown-toc-correlation-of-legacy-asset--ai-progress-02&quot;&gt;Correlation of legacy asset &amp;lt;&amp;gt; AI progress: 0.2&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#impact-investing-factor-ignored&quot; id=&quot;markdown-toc-impact-investing-factor-ignored&quot;&gt;Impact investing factor: ignored&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#target-volatility-30&quot; id=&quot;markdown-toc-target-volatility-30&quot;&gt;Target volatility: 30%&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#cost-of-leverage-2&quot; id=&quot;markdown-toc-cost-of-leverage-2&quot;&gt;Cost of leverage: 2%&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#proportion-of-portfolio-currently-invested-in-mvolegacyhedge-assets-50--50--0&quot; id=&quot;markdown-toc-proportion-of-portfolio-currently-invested-in-mvolegacyhedge-assets-50--50--0&quot;&gt;Proportion of portfolio currently invested in MVO/legacy/hedge assets: 50% / 50% / 0%&lt;/a&gt;&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#appendix-b-ai-progress-by-model-size-growth&quot; id=&quot;markdown-toc-appendix-b-ai-progress-by-model-size-growth&quot;&gt;Appendix B: AI progress by model size growth&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#appendix-c-optimal-equilibrium-allocation-to-mission-hedging&quot; id=&quot;markdown-toc-appendix-c-optimal-equilibrium-allocation-to-mission-hedging&quot;&gt;Appendix C: Optimal equilibrium allocation to mission hedging&lt;/a&gt;        &lt;ul&gt;
          &lt;li&gt;&lt;a href=&quot;#optimal-allocation-under-compound-until-agi-utility&quot; id=&quot;markdown-toc-optimal-allocation-under-compound-until-agi-utility&quot;&gt;Optimal allocation under “compound-until-AGI” utility&lt;/a&gt;&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#notes&quot; id=&quot;markdown-toc-notes&quot;&gt;Notes&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h1 id=&quot;methodology&quot;&gt;Methodology&lt;/h1&gt;

&lt;p&gt;Let’s start with the &lt;a href=&quot;https://mdickens.me/2022/04/04/model_of_mission-correlated_investing/#setup&quot;&gt;framework&lt;/a&gt; that I laid out in the previous post. To recap, this framework considers three investable assets:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;a &lt;a href=&quot;https://mdickens.me/2022/04/04/model_of_mission-correlated_investing/#results-with-a-legacy-investment&quot;&gt;legacy asset&lt;/a&gt;: a suboptimal investment that we currently hold and want to sell&lt;/li&gt;
  &lt;li&gt;a &lt;a href=&quot;https://www.investopedia.com/terms/m/meanvariance-analysis.asp&quot;&gt;mean-variance&lt;/a&gt; optimal (MVO) asset: the best thing we could buy if we simply wanted to maximize risk-adjusted return&lt;/li&gt;
  &lt;li&gt;a mission hedge asset: something that correlates with AI progress, e.g., AI company stocks&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We also have a &lt;strong&gt;mission target&lt;/strong&gt;, which is a measure of the thing we want to hedge. For our purposes, the mission target is AI progress.&lt;/p&gt;

&lt;p&gt;We can sell a little bit of the legacy asset and invest the proceeds into either the MVO asset or the hedge asset. The model tells us which asset we should choose.&lt;sup id=&quot;fnref:17&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:17&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;We feed the model a few inputs:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;the expected return, volatility, and correlations for these four variables (three assets plus the mission target)&lt;/li&gt;
  &lt;li&gt;a utility function over wealth and the mission target&lt;/li&gt;
  &lt;li&gt;our starting level of investment in the three assets&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The model tells us which asset we should shift our investments toward &lt;a href=&quot;#why-care-about-mission-hedging-on-the-margin-rather-than-in-equilibrium&quot;&gt;on the margin&lt;/a&gt;.&lt;/p&gt;
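&lt;p&gt;The marginal comparison the model performs can be sketched in Python. This is a simplified one-period Monte Carlo under placeholder parameters (the return, volatility, and correlation numbers below are made up for illustration, not the calibrated values used later), with the marginal utility of wealth assumed linear in AI progress:&lt;/p&gt;

```python
import numpy as np

rng = np.random.default_rng(0)
GAMMA = 1.5  # coefficient of relative risk aversion

# Placeholder annual log-return drift and volatility for the three assets,
# plus a log "AI progress" shock as the mission target (all hypothetical).
names = ["mvo", "legacy", "hedge", "progress"]
mu = np.array([0.06, 0.05, 0.05, 0.0])
sigma = np.array([0.13, 0.30, 0.25, 0.15])
corr = np.array([
    [1.0, 0.5, 0.5, 0.0],  # mvo
    [0.5, 1.0, 0.7, 0.2],  # legacy
    [0.5, 0.7, 1.0, 0.3],  # hedge
    [0.0, 0.2, 0.3, 1.0],  # progress
])
cov = corr * np.outer(sigma, sigma)
shocks = rng.multivariate_normal(mu, cov, size=200_000)

def expected_utility(weights):
    """E[U] where U(W, b) = b * W^(1-gamma) / (1-gamma), i.e. the marginal
    utility of money is linear in the mission target b."""
    growth = np.exp(shocks[:, :3])       # gross one-period returns of the 3 assets
    wealth = growth @ np.array(weights)  # one-period portfolio value
    b = np.exp(shocks[:, 3])             # realized AI progress
    return np.mean(b * wealth ** (1 - GAMMA) / (1 - GAMMA))

# Start at 50% MVO / 50% legacy / 0% hedge; shift 1% of the legacy asset
# into either the MVO asset or the hedge asset and compare.
base = [0.50, 0.50, 0.00]
to_mvo = [0.51, 0.49, 0.00]
to_hedge = [0.50, 0.49, 0.01]
best = "mvo" if expected_utility(to_mvo) > expected_utility(to_hedge) else "hedge"
print(best)
```

&lt;p&gt;Whichever shift yields higher expected utility is the direction to move on the margin; the essay’s model answers the same question with its own calibrated parameters.&lt;/p&gt;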

&lt;p&gt;(Note: I have tried to make this essay understandable on its own, but it will probably still be easier to understand if you read &lt;a href=&quot;https://mdickens.me/2022/04/04/model_of_mission-correlated_investing/&quot;&gt;my previous essay&lt;/a&gt; first.)&lt;/p&gt;

&lt;p&gt;My model includes many variables. I assigned fixed values to most of them—see &lt;a href=&quot;#a-big-list-of-considerations&quot;&gt;below&lt;/a&gt; for a big list of model considerations. I fixed the value of a variable wherever one of the following was true: (1) I was confident that it was (roughly) correct; (2) I was confident that it would be too hard to come up with a better estimate; (3) it didn’t affect the outcome much; or (4) it was necessary to make the model tractable.&lt;/p&gt;

&lt;p&gt;That left three considerations:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;What is the relationship between AI progress and the utility of money?&lt;/li&gt;
  &lt;li&gt;How volatile is AI progress?&lt;/li&gt;
  &lt;li&gt;What investment has the strongest correlation to AI progress, how strong is that correlation, and what is its expected return?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;So the question of whether to mission hedge (according to this model) comes down to how we answer these three questions.&lt;/p&gt;

&lt;h2 id=&quot;why-care-about-mission-hedging-on-the-margin-rather-than-in-equilibrium&quot;&gt;Why care about mission hedging on the margin rather than in equilibrium?&lt;/h2&gt;

&lt;p&gt;When evaluating our investment portfolios, we can consider two distinct questions:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;What is the optimal allocation to mission hedging in equilibrium—in other words, what is the ultimate allocation to mission hedging that maximizes utility?&lt;/li&gt;
  &lt;li&gt;On the margin, what is the expected utility of allocating the next dollar to mission hedging vs. traditional investing?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This essay focuses on the second question because it matters more. Why does it matter more?&lt;/p&gt;

&lt;p&gt;In the long run, we want to know the optimal asset allocation. But we can’t instantly move to the optimal allocation. Many altruistic investors have money tied up in assets that they can’t sell quickly; many value-aligned investors don’t put much thought into how they invest, and end up doing something suboptimal, so we need to &lt;a href=&quot;https://mdickens.me/2022/03/18/altruistic_investors_care_about_aggregate_altruistic_portfolio/&quot;&gt;account for this&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I expect it will take a bare minimum of 10 years for the EA community to move from the current investment allocation to the optimal one.&lt;sup id=&quot;fnref:44&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:44&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt; Until then, we want to know what changes will maximize utility in the short run. Therefore, we need to identify the best change on the margin.&lt;/p&gt;

&lt;h2 id=&quot;a-big-list-of-considerations&quot;&gt;A big list of considerations&lt;/h2&gt;

&lt;p&gt;Considerations that matter, with the key considerations in bold:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Shape of the utility function with respect to wealth&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Shape of the utility function with respect to AI progress&lt;/strong&gt;&lt;/li&gt;
  &lt;li&gt;Coefficient of relative risk aversion&lt;/li&gt;
  &lt;li&gt;Shape of the distribution of investment returns&lt;/li&gt;
  &lt;li&gt;Shape of the distribution of AI progress&lt;/li&gt;
  &lt;li&gt;Expected return of the MVO asset&lt;/li&gt;
  &lt;li&gt;Expected return of the legacy asset&lt;/li&gt;
  &lt;li&gt;Expected return of the hedge asset&lt;/li&gt;
  &lt;li&gt;Volatility of the MVO asset&lt;/li&gt;
  &lt;li&gt;Volatility of the legacy asset&lt;/li&gt;
  &lt;li&gt;Volatility of the hedge asset&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Volatility of AI progress&lt;/strong&gt;&lt;/li&gt;
  &lt;li&gt;Correlation of the MVO asset and the legacy asset&lt;/li&gt;
  &lt;li&gt;Correlation of the MVO asset and the hedge asset&lt;/li&gt;
  &lt;li&gt;Correlation of the legacy asset and the hedge asset&lt;/li&gt;
  &lt;li&gt;Correlation of the MVO asset and AI progress&lt;/li&gt;
  &lt;li&gt;Correlation of the legacy asset and AI progress&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Correlation of the hedge asset and AI progress&lt;/strong&gt;&lt;/li&gt;
  &lt;li&gt;Impact investing factor&lt;/li&gt;
  &lt;li&gt;Target volatility&lt;/li&gt;
  &lt;li&gt;Cost of leverage&lt;/li&gt;
  &lt;li&gt;Proportion of portfolio currently invested in MVO/legacy/hedge assets&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;A consideration that you might think matters, but doesn’t:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Expected rate of AI progress — doesn’t affect optimal hedging (for an explanation, see the third qualitative observation &lt;a href=&quot;https://mdickens.me/2022/04/04/model_of_mission-correlated_investing/#general-solution&quot;&gt;here&lt;/a&gt;)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I assign each non-key consideration a single fixed value. The only exception is the return/volatility of the MVO asset, where I included numbers for both a &lt;a href=&quot;https://mdickens.me/2020/11/23/uncorrelated_investing/#factor-investing&quot;&gt;factor portfolio&lt;/a&gt; and the global market portfolio. I expect a factor portfolio to earn a better return and I believe it can be used as the MVO asset. But I don’t expect to convince everyone of this, so for skeptical readers, I also included numbers for the global market portfolio.&lt;/p&gt;

&lt;p&gt;In the following list, I assign values to every non-crucial consideration. For justifications of each value, see &lt;a href=&quot;#appendix-a-justifications-for-the-big-list-of-considerations&quot;&gt;Appendix A&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;For all values relating to the MVO asset, I present results both for the global market portfolio (GMP) and for a factor portfolio (FP).&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Shape of the utility function with respect to wealth: bounded constant relative risk aversion&lt;/li&gt;
  &lt;li&gt;Shape of the distribution of investment returns: log-normal&lt;/li&gt;
  &lt;li&gt;Shape of the distribution of AI progress: log-normal&lt;/li&gt;
  &lt;li&gt;Coefficient of relative risk aversion: 1.5&lt;/li&gt;
  &lt;li&gt;Arithmetic return of the MVO asset (nominal): 5% (GMP) or 9% (FP)&lt;sup id=&quot;fnref:34&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:34&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;5&lt;/a&gt;&lt;/sup&gt;&lt;/li&gt;
  &lt;li&gt;Arithmetic return of the legacy asset: 7%&lt;/li&gt;
  &lt;li&gt;Arithmetic return of the hedge asset: 7%&lt;/li&gt;
  &lt;li&gt;Volatility of the MVO asset: 9% (GMP) or 13% (FP)&lt;/li&gt;
  &lt;li&gt;Volatility of the legacy asset: 30%&lt;/li&gt;
  &lt;li&gt;Volatility of the hedge asset: 25%&lt;/li&gt;
  &lt;li&gt;Correlation of the MVO asset and the legacy asset: 0.6 (GMP) or 0.5 (FP)&lt;/li&gt;
  &lt;li&gt;Correlation of the MVO asset and the hedge asset: 0.6 (GMP) or 0.5 (FP)&lt;/li&gt;
  &lt;li&gt;Correlation of the legacy asset and the hedge asset: 0.7&lt;/li&gt;
  &lt;li&gt;Correlation of the MVO asset and AI progress: 0.0 (GMP) or –0.1 (FP)&lt;/li&gt;
  &lt;li&gt;Correlation of the legacy asset and AI progress: 0.2&lt;/li&gt;
  &lt;li&gt;Impact investing factor: ignored&lt;/li&gt;
  &lt;li&gt;Target volatility: 30%&lt;/li&gt;
  &lt;li&gt;Cost of leverage: 2%&lt;/li&gt;
  &lt;li&gt;Proportion of portfolio currently invested in MVO/legacy/hedge assets: 50% / 50% / 0%&lt;/li&gt;
&lt;/ol&gt;
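&lt;p&gt;For reference, the fixed values above can be transcribed into a small configuration (this is just a restatement of the list, with separate entries where the global market portfolio and factor portfolio differ):&lt;/p&gt;

```python
# Fixed model parameters from the list above; {"GMP": ..., "FP": ...} where
# the two candidate MVO portfolios differ. Returns/vols are annual arithmetic.
PARAMS = {
    "risk_aversion": 1.5,
    "return": {"mvo": {"GMP": 0.05, "FP": 0.09}, "legacy": 0.07, "hedge": 0.07},
    "volatility": {"mvo": {"GMP": 0.09, "FP": 0.13}, "legacy": 0.30, "hedge": 0.25},
    "correlation": {
        ("mvo", "legacy"): {"GMP": 0.6, "FP": 0.5},
        ("mvo", "hedge"): {"GMP": 0.6, "FP": 0.5},
        ("legacy", "hedge"): 0.7,
        ("mvo", "progress"): {"GMP": 0.0, "FP": -0.1},
        ("legacy", "progress"): 0.2,
    },
    "target_volatility": 0.30,
    "cost_of_leverage": 0.02,
    "starting_allocation": {"mvo": 0.5, "legacy": 0.5, "hedge": 0.0},
}
```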

&lt;h1 id=&quot;what-is-the-shape-of-the-utility-function&quot;&gt;What is the shape of the utility function?&lt;/h1&gt;

&lt;p&gt;&lt;em&gt;Caveat: Someone who works on AI safety would probably be better positioned than me to think about the shape of the utility function. But I’m going to speculate on it anyway.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;When I &lt;a href=&quot;https://mdickens.me/2022/04/04/model_of_mission-correlated_investing/#the-utility-function&quot;&gt;initially developed&lt;/a&gt; a model of mission hedging, I specified that utility varies linearly with the mission target. That makes perfect sense if the mission target is something like CO2 emissions. It’s not clear that it makes sense for AI. Either transformative AI goes well and it’s amazing, or it goes poorly and everyone dies. It’s binary, not linear.&lt;/p&gt;

&lt;p&gt;Consider an alternative utility function with these properties:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Transformative AI happens when some unknown amount of AI progress has occurred. The required amount of progress follows some known probability distribution.&lt;/li&gt;
  &lt;li&gt;If some unknown amount of alignment research gets done before then, transformative AI goes well. Utility = 1. (The units of our utility function are normalized such that the total value of the light cone is 1.) Otherwise, transformative AI goes badly. Utility = 0. The required amount of alignment research follows some known probability distribution.&lt;/li&gt;
&lt;/ol&gt;
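&lt;p&gt;Under this binary utility function, the value of marginal research is just the bump in the probability of success, which a quick Monte Carlo makes concrete (the log-normal distributions and all parameters here are hypothetical placeholders):&lt;/p&gt;

```python
import numpy as np

rng = np.random.default_rng(0)

def marginal_research_value(done_sigma, required_sigma, extra=0.05, n=200_000):
    """Increase in P(success) from adding `extra` units of alignment research,
    when both the research that gets done anyway and the research required are
    log-normally distributed (placeholder distributions, medians fixed at 1)."""
    done = rng.lognormal(0.0, done_sigma, n)       # research done without you
    required = rng.lognormal(0.0, required_sigma, n)  # research required
    return np.mean(done + extra >= required) - np.mean(done >= required)

# Tight distributions: the counterfactual total is often near the threshold,
# so marginal research frequently tips the outcome.
tight = marginal_research_value(0.05, 0.05)
# Wide distributions: the outcome is almost always decided either way.
wide = marginal_research_value(2.0, 2.0)
print(tight, wide)
```

&lt;p&gt;With tightly overlapping distributions, the marginal contribution often tips the outcome; with wide ones, it almost never does.&lt;/p&gt;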

&lt;p&gt;This utility function (kind of) describes reality better, but it doesn’t really work—it heavily depends on what probability distributions you pick. The model only puts high value on AI alignment research if there’s a high probability that, if you don’t do anything, the amount of alignment research will be very close to the exact required amount. If there’s a high probability that enough alignment research gets done without you, or that we’ll never get enough research done, then the model says you shouldn’t bother. The model becomes highly sensitive to forecasts of AI timelines and of the amount of safety research required.&lt;/p&gt;

&lt;p&gt;(This utility function might imply that we should &lt;a href=&quot;https://forum.effectivealtruism.org/posts/YApaCr98Q7wSYcaoB/mission-correlation-more-than-just-hedging&quot;&gt;mission leverage&lt;/a&gt;, not mission hedge. If AI progress accelerates, that decreases the probability that we will be able to get enough safety research done, which might decrease the marginal utility of research.)&lt;/p&gt;

&lt;p&gt;This binary way of thinking doesn’t reflect the way people actually think about the value of research. People tend to believe that if less safety research gets done, that makes marginal research more valuable, not less.&lt;/p&gt;

&lt;p&gt;I would think of research in terms of drawing ideas from an urn. Every idea you draw might be the one you need. You start with the most promising ideas and then move on to less promising ones, so you experience diminishing marginal utility of research. And if AI progress moves faster, then marginal alignment research matters more because the last draw from the urn has higher expected utility.&lt;/p&gt;

&lt;p&gt;But if transformative AI will take a long time, that means we get many chances to draw from the urn of ideas. So even if you draw a crucial idea now, there’s a good chance that someone else would have drawn it later anyway—your counterfactual impact is smaller in expectation. And the probability that someone else would have drawn the same idea is inversely proportional to the length of time until transformative AI. So we’re back to the original assumption that utility is linear with AI progress.&lt;sup id=&quot;fnref:9&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:9&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;6&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;h3 id=&quot;compound-until-agi-utility-function&quot;&gt;Compound-until-AGI utility function&lt;/h3&gt;

&lt;p&gt;We could use an alternative model, which we might call the “compound-until-AGI” model:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;As before, every dollar spent has some diminishing probability of solving AI alignment.&lt;/li&gt;
  &lt;li&gt;Wealth continues to compound until transformative AI emerges, which happens at some uncertain future date.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If we say W = wealth, b = the mission target—in this case, the inverse of time until transformative AI—then the utility function becomes:&lt;/p&gt;

&lt;p&gt;\begin{align}
U(W, b) = \frac{(W^b)^{1 - \gamma}}{1 - \gamma}
\end{align}&lt;/p&gt;
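&lt;p&gt;As a sanity check, this utility function is a one-line transcription (using the \(\gamma = 1.5\) from the parameter list above):&lt;/p&gt;

```python
def compound_until_agi_utility(wealth, b, gamma=1.5):
    """U(W, b) = (W^b)^(1 - gamma) / (1 - gamma), with b the mission target
    (the inverse of time until transformative AI) and gamma the coefficient
    of relative risk aversion."""
    return (wealth ** b) ** (1 - gamma) / (1 - gamma)

print(compound_until_agi_utility(2.0, 1.0))  # ≈ -1.414 (negative for gamma > 1)
print(compound_until_agi_utility(2.0, 0.5))  # ≈ -1.682
```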

&lt;p&gt;This model heavily disfavors mission hedging, to the point that &lt;strong&gt;I could not find any reasonable combination of parameters that favored mission hedging on the margin over traditional investing.&lt;/strong&gt;&lt;sup id=&quot;fnref:12&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:12&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;7&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;For the rest of this essay, I will focus on the original model, with the understanding that the compound-until-AGI model may be more accurate, but it disfavors mission hedging for any plausible empirical parameters.&lt;/p&gt;

&lt;p&gt;The utility function is a weak point in my model. I basically made up these utility functions without any strong justification. And a different function could generate significantly different prescriptions.&lt;/p&gt;

&lt;h1 id=&quot;how-volatile-is-ai-progress&quot;&gt;How volatile is AI progress?&lt;/h1&gt;

&lt;p&gt;I measured AI progress (or proxies for AI progress) in three different ways:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Average industry revenue across all US industries &lt;a href=&quot;#industry-revenue&quot;&gt;[More]&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;Performance on ML benchmarks &lt;a href=&quot;#ml-benchmark-performance&quot;&gt;[More]&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;AI timeline forecasts &lt;a href=&quot;#ai-timeline-forecasts&quot;&gt;[More]&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;These measures go from most to least robust, and from least to most relevant. We can robustly estimate industry revenue because we have a large quantity of historical data, but it’s only loosely relevant. AI timeline forecasts are highly relevant, but we have barely any data.&lt;/p&gt;

&lt;p&gt;This table provides estimated standard deviations according to various methods. The table does not give any further context, but I explain all these estimates in their respective sections below.&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Category&lt;/th&gt;
      &lt;th&gt;Method&lt;/th&gt;
      &lt;th&gt;\(\sigma\)&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;1. revenue&lt;/td&gt;
      &lt;td&gt;weighted average (long-only)&lt;/td&gt;
      &lt;td&gt;17.6%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;1. revenue&lt;/td&gt;
      &lt;td&gt;weighted average (market-neutral)&lt;/td&gt;
      &lt;td&gt;19.9%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;2. ML benchmarks&lt;/td&gt;
      &lt;td&gt;average across all benchmarks&lt;/td&gt;
      &lt;td&gt;15.4%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;3. forecasts&lt;/td&gt;
      &lt;td&gt;Metaculus (point-in-time)&lt;/td&gt;
      &lt;td&gt;8.9%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;3. forecasts&lt;/td&gt;
      &lt;td&gt;Metaculus (across time)&lt;/td&gt;
      &lt;td&gt;4.3%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;3. forecasts&lt;/td&gt;
      &lt;td&gt;Grace et al. (2017) survey&lt;/td&gt;
      &lt;td&gt;N/A&lt;sup id=&quot;fnref:16&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:16&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;8&lt;/a&gt;&lt;/sup&gt;&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Estimates range from 4.3% to 19.9%, with an average of 12.3% and a category-weighted average of 13.7%.&lt;/p&gt;

&lt;p&gt;The next table gives the standard deviation over the logarithm of AI timeline forecasts. (See &lt;a href=&quot;#ai-timeline-forecasts&quot;&gt;below&lt;/a&gt; for how to interpret these numbers.)&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Method&lt;/th&gt;
      &lt;th&gt;\(\sigma\)&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Metaculus&lt;/td&gt;
      &lt;td&gt;0.99&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Grace et al. (low estimate)&lt;/td&gt;
      &lt;td&gt;1.20&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Grace et al. (high estimate)&lt;/td&gt;
      &lt;td&gt;1.32&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;h2 id=&quot;industry-revenue&quot;&gt;Industry revenue&lt;/h2&gt;

&lt;p&gt;How do we quantify the success of an industry? Ideally, we’d track the &lt;a href=&quot;https://mdickens.me/2021/08/18/mission_hedging_quantity_price/&quot;&gt;quantity of production&lt;/a&gt;, but I don’t have good data on that, so I’ll look at revenue. Revenue is a reasonable proxy for success. Revenue and stock price are probably more closely correlated than production quantity and stock price because there’s a more direct path from revenue to shareholder value.&lt;/p&gt;

&lt;p&gt;Rather than just looking at the ML industry (which is only one industry, and didn’t even exist until recently), I looked at every industry. For each industry, I calculated the growth in aggregate revenue across the industry, and then found the volatility of growth.&lt;/p&gt;

&lt;p&gt;It’s not obvious how to aggregate the volatility numbers for every industry. We could take the mean, but there are some tiny industries with only a few small-cap companies, and those have super high volatility. So the average volatility across all industries doesn’t represent a reasonable expectation about what the ML industry might look like in the future.&lt;/p&gt;

&lt;p&gt;We could take the median, but that doesn’t capture skewness in industry volatility.&lt;/p&gt;

&lt;p&gt;I believe the best metric is a weighted average, weighted by total market cap of the industry. So larger industries get a greater weight when calculating average volatility.&lt;/p&gt;
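
&lt;p&gt;As a sketch, the cap-weighted aggregation looks like this (the industries, market caps, and volatilities below are made up for illustration, not my actual data):&lt;/p&gt;

```python
# Market-cap-weighted average of per-industry revenue-growth volatilities.
# Industry names, market caps, and volatilities are invented for illustration.
industries = [
    # (name, market cap in $B, annualized volatility of revenue growth)
    ("Banks",           2000, 0.14),
    ("Semiconductors",   800, 0.22),
    ("Micro-cap niche",   10, 0.60),  # tiny industry, huge volatility
]

total_cap = sum(cap for _, cap, _ in industries)
weighted_vol = sum(cap * vol for _, cap, vol in industries) / total_cap
mean_vol = sum(vol for _, _, vol in industries) / len(industries)

# The cap-weighted figure is barely moved by the tiny, noisy industry,
# while the unweighted mean is pulled upward by it.
print(round(weighted_vol, 3), round(mean_vol, 3))
```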

&lt;p&gt;I calculated the volatility of industry revenue from 1973 to 2013&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;9&lt;/a&gt;&lt;/sup&gt;. I also calculated the volatility of &lt;em&gt;market-neutral&lt;/em&gt; industry revenue, which is the revenue per share of an industry minus the revenue per share of the total market.&lt;sup id=&quot;fnref:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;10&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;The results:&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt;Long-Only&lt;/th&gt;
      &lt;th&gt;Market-Neutral&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Weighted Mean&lt;/td&gt;
      &lt;td&gt;17.6%&lt;/td&gt;
      &lt;td&gt;19.9%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Mean&lt;/td&gt;
      &lt;td&gt;18.2%&lt;/td&gt;
      &lt;td&gt;20.2%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Median&lt;/td&gt;
      &lt;td&gt;14.1%&lt;/td&gt;
      &lt;td&gt;17.2%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Std Dev&lt;/td&gt;
      &lt;td&gt;14.5%&lt;/td&gt;
      &lt;td&gt;8.8%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;(&lt;code&gt;Std Dev&lt;/code&gt; is the standard deviation of the standard deviations.)&lt;/p&gt;

&lt;p&gt;For context, there were 66 industries as defined by &lt;a href=&quot;https://en.wikipedia.org/wiki/Global_Industry_Classification_Standard&quot;&gt;GICS&lt;/a&gt;. The average industry contained 17 stocks in 1973 and 47 stocks in 2013.&lt;/p&gt;

&lt;h2 id=&quot;ml-benchmark-performance&quot;&gt;ML benchmark performance&lt;/h2&gt;

&lt;p&gt;Fortunately, other people have already put a lot of work into measuring the performance over time of state-of-the-art ML models.&lt;/p&gt;

&lt;p&gt;Unfortunately, they all care about average improvement or trends in improvement, not about volatility.&lt;/p&gt;

&lt;p&gt;Fortunately, some of them have made their data publicly available, so we can calculate the volatility using their data.&lt;/p&gt;

&lt;p&gt;I used the Electronic Frontier Foundation’s &lt;a href=&quot;https://www.eff.org/ai/metrics&quot;&gt;AI Progress Measurement&lt;/a&gt; database, which aggregates ML model scores across dozens of benchmarks, including Chess Elo rating, Atari game performance, image classification loss, and language parsing accuracy, to name a few. The data ranges from 1984 to 2019, although until 2003, it only included a single measure of AI performance (Chess Elo), and there still weren’t many metrics until 2015.&lt;/p&gt;

&lt;p&gt;For each metric-year, I calculated the growth in performance on that metric since the previous year, where performance is defined as loss (or inverse loss, if you want a number that increases rather than decreases&lt;sup id=&quot;fnref:10&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:10&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;11&lt;/a&gt;&lt;/sup&gt;). Then I took the average growth across all metrics.&lt;/p&gt;
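
&lt;p&gt;A minimal sketch of this calculation (the benchmark names and loss values are invented, not the EFF data):&lt;/p&gt;

```python
import math

# Sketch of the metric-year growth calculation. Performance is inverse loss,
# so higher is better. Benchmark names and values are invented for illustration.
inverse_loss = {
    "image classification": {2017: 10.0, 2018: 13.0, 2019: 16.0},
    "language parsing":     {2017: 4.0,  2018: 4.6,  2019: 5.8},
}

growth_by_year = {}
for metric, series in inverse_loss.items():
    years = sorted(series)
    for prev, cur in zip(years, years[1:]):
        growth = series[cur] / series[prev] - 1.0
        growth_by_year.setdefault(cur, []).append(growth)

# Average growth across metrics for each year, then take the log of growth,
# which is the quantity whose mean and standard deviation get reported.
avg_growth = {y: sum(g) / len(g) for y, g in growth_by_year.items()}
log_growth = {y: math.log(1.0 + g) for y, g in avg_growth.items()}
```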

&lt;p&gt;I calculated the mean and standard deviation of the logarithm of growth over three time periods: the full period (1985–2019&lt;sup id=&quot;fnref:11&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:11&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;12&lt;/a&gt;&lt;/sup&gt;), just the most recent 10 years (2010–2019), and just the period with many metrics (2015–2019):&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Range&lt;/th&gt;
      &lt;th&gt;Mean&lt;/th&gt;
      &lt;th&gt;Stdev&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;1985–2019&lt;/td&gt;
      &lt;td&gt;10.7%&lt;/td&gt;
      &lt;td&gt;15.4%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;2010–2019&lt;/td&gt;
      &lt;td&gt;29.6%&lt;/td&gt;
      &lt;td&gt;15.8%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;2015–2019&lt;/td&gt;
      &lt;td&gt;40.6%&lt;/td&gt;
      &lt;td&gt;15.7%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Before running these numbers, I worried that I’d get dramatically different numbers for the standard deviation depending on what range I used. But luckily, all three standard deviations fall within 0.4 percentage points of one another (even though the averages vary a lot). These numbers (weakly) suggest that, even though ML progress has been accelerating, volatility hasn’t increased. (Although if we expect ML progress to continue accelerating, that means our subjective volatility should be higher.)&lt;/p&gt;

&lt;h2 id=&quot;ai-timeline-forecasts&quot;&gt;AI timeline forecasts&lt;/h2&gt;

&lt;p&gt;As a final test, I calculated AI progress from AI timeline forecasts. In a sense, this is the best metric, because it’s the only one that really captures the type of progress we care about. But in another sense, it’s the worst metric, because (1) we don’t have any particularly good forecasts, and (2) how do you convert a timeline into a growth rate?&lt;/p&gt;

&lt;p&gt;To convert an AI timeline into a growth rate, I simply took the inverse of the number of years until transformative AI. This is a questionable move because it assumes (1) AI progresses linearly and (2) there exists some meaningful notion of a growth rate that describes what AI needs to do to get to transformative AI from where we are now.&lt;/p&gt;

&lt;p&gt;Questionableness aside, I used &lt;a href=&quot;https://www.metaculus.com/questions/3479/&quot;&gt;Metaculus question 3479&lt;/a&gt;, “When will the first AGI be first developed and demonstrated?” because it’s the most-answered Metaculus question about AGI, with (as of 2022-04-14) 1070 predictions across 363 forecasters.&lt;sup id=&quot;fnref:37&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:37&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;13&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;First I looked at the standard deviation of the inverse of the mean date across forecasters, which came out to 8.9%. But this isn’t quite what we want. The point of mission hedging is to buy an asset that covaries with your mission target over time. So I also looked at the Metaculus community prediction (an aggregate best guess across all predictors) across time, and found a standard deviation of 4.3%.&lt;/p&gt;

&lt;p&gt;I also applied AI timeline forecasts to the &lt;a href=&quot;#compound-until-agi-utility-function&quot;&gt;“compound-until-AGI”&lt;/a&gt; model, which has a more meaningful interpretation. This alternative model takes a log-normally distributed AI timeline and calculates the expected probability of solving the alignment problem by the time transformative AI arrives. For this model, we want to know the standard deviation of the logarithm of estimated AI timelines.&lt;/p&gt;

&lt;p&gt;Metaculus question #3479 gives a mean log-timeline of 3.09 log-years and a standard deviation of 0.99 log-years.&lt;/p&gt;

&lt;p&gt;(The exponential of 0.99 log-years would be 2.69 years, except that we can’t meaningfully talk about the standard deviation in terms of years. What we &lt;em&gt;can&lt;/em&gt; meaningfully say is that the geometric mean timeline is \(e^{3.09}\) = 22 years, and the timeline at +1 standard deviation is \(e^{3.09 + 0.99}\) = 59 years. And 59 / 22 = 2.69. So 2.69 is more like the “year growth factor”—every one standard deviation, the timeline multiplies by 2.69.)&lt;/p&gt;
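
&lt;p&gt;The arithmetic in that parenthetical can be checked directly:&lt;/p&gt;

```python
import math

mean_log = 3.09   # mean of log(years to AGI), from Metaculus question 3479
sigma_log = 0.99  # standard deviation of log(years)

geometric_mean_years = math.exp(mean_log)              # about 22 years
plus_one_sigma_years = math.exp(mean_log + sigma_log)  # about 59 years

# The ratio equals exp(sigma_log): the "year growth factor" per standard deviation.
growth_factor = plus_one_sigma_years / geometric_mean_years  # about 2.69

print(round(geometric_mean_years), round(plus_one_sigma_years), round(growth_factor, 2))
```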

&lt;p&gt;I also used the “High Level Machine Intelligence” forecast from the expert survey in &lt;a href=&quot;https://arxiv.org/abs/1705.08807&quot;&gt;Grace et al. (2017)&lt;/a&gt;&lt;sup id=&quot;fnref:18&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:18&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;14&lt;/a&gt;&lt;/sup&gt;&lt;sup id=&quot;fnref:33&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:33&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;15&lt;/a&gt;&lt;/sup&gt;. The paper gives a mean of 3.8 log-years (corresponding to a median&lt;sup id=&quot;fnref:32&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:32&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;16&lt;/a&gt;&lt;/sup&gt; timeline of 45 years). The paper does not report the exact standard deviation, but I estimated it from the first and third quartiles and came up with 1.26 log-years.&lt;sup id=&quot;fnref:19&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:19&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;17&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;
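
&lt;p&gt;One standard way to back out \(\sigma\) from the quartiles of a log-normal distribution, sketched with hypothetical quartile values (not the survey’s actual numbers):&lt;/p&gt;

```python
import math
from statistics import NormalDist

# If log-timelines are normally distributed, the first and third quartiles
# pin down sigma: q1 = exp(mu - z*sigma) and q3 = exp(mu + z*sigma), where
# z is the 75th-percentile z-score. The quartile values here are hypothetical.
q1_years, q3_years = 18.0, 100.0

z = NormalDist().inv_cdf(0.75)  # about 0.6745
sigma = (math.log(q3_years) - math.log(q1_years)) / (2.0 * z)
```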

&lt;p&gt;As discussed &lt;a href=&quot;#compound-until-agi-utility-function&quot;&gt;previously&lt;/a&gt;, I could not find any reasonable inputs that favored mission hedging under the compound-until-AGI model. (I tried the figures derived from Metaculus and Grace et al. as well as a much wider range of parameter values.)&lt;/p&gt;

&lt;h1 id=&quot;how-good-is-the-best-hedge&quot;&gt;How good is the best hedge?&lt;/h1&gt;

&lt;p&gt;We could hedge AI progress with a basket of AI company stocks. But it’s hard to tell how well this will work:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;The companies that do the most/best AI research are big companies like Google. AI only makes up a small portion of what Google does, and its progress on AI doesn’t drive the stock price (at least for now).&lt;/li&gt;
  &lt;li&gt;Some small companies exclusively do AI work, but not many, and they haven’t existed for long.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;(I looked for ETFs and mutual funds that focus on AI stocks. About a dozen such funds exist, but all but one of them are less than four years old. The oldest is &lt;a href=&quot;https://www.etf.com/ARKQ&quot;&gt;ARKQ&lt;/a&gt;, which only launched in 2014, and doesn’t quite match what we want anyway.)&lt;/p&gt;

&lt;p&gt;Let’s look at how some possible hedges correlate to their targets.&lt;/p&gt;

&lt;h2 id=&quot;industry-index--industry-revenue&quot;&gt;Industry index &amp;lt;&amp;gt; industry revenue&lt;/h2&gt;

&lt;p&gt;What’s the relationship between AI progress and a basket of AI company stocks?&lt;/p&gt;

&lt;p&gt;Let’s ask an easier question: For any industry, what is the relationship between the productivity of that industry and its average stock return?&lt;/p&gt;

&lt;p&gt;This question is only relevant insofar as the AI industry behaves the same as any other industry, and insofar as “revenue” is the same thing as “productivity”. But it gives us a much larger sample (dozens of industries instead of one) and a longer history (~40 years instead of ~7).&lt;/p&gt;

&lt;figure&gt;
&lt;img src=&quot;/assets/images/superficially-similar.png&quot; /&gt;
&lt;figcaption&gt;this but unironically&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;p&gt;As &lt;a href=&quot;#industry-revenue&quot;&gt;above&lt;/a&gt;, we can measure the success of an industry in terms of its revenue. And as above, I calculated the weighted average correlation between industry revenues and returns, and also between market-neutral industry revenues and returns.&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt;Long-Only&lt;/th&gt;
      &lt;th&gt;Market-Neutral&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Weighted Mean&lt;/td&gt;
      &lt;td&gt;0.21&lt;/td&gt;
      &lt;td&gt;0.04&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Mean&lt;/td&gt;
      &lt;td&gt;0.23&lt;/td&gt;
      &lt;td&gt;0.06&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Median&lt;/td&gt;
      &lt;td&gt;0.25&lt;/td&gt;
      &lt;td&gt;0.06&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Std Dev&lt;/td&gt;
      &lt;td&gt;0.23&lt;/td&gt;
      &lt;td&gt;0.22&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;These correlations are really bad. The weighted mean long-only correlation is actually lower than my most pessimistic estimate in my &lt;a href=&quot;https://mdickens.me/2022/04/04/model_of_mission-correlated_investing/&quot;&gt;original essay&lt;/a&gt;. And the market-neutral correlation, which better captures what we care about, might as well be zero.&lt;/p&gt;
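
&lt;p&gt;The weighted-average correlation computation above can be sketched as follows (the revenue-growth and return series are invented for illustration):&lt;/p&gt;

```python
import math

def pearson(xs, ys):
    # Plain Pearson correlation between two equal-length series.
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Per-industry annual revenue growth and stock returns (all numbers invented).
industries = {
    # name: (market cap, revenue growth series, return series)
    "A": (1000, [0.05, 0.10, 0.02], [0.08, 0.15, 0.01]),
    "B": ( 500, [0.20, 0.01, 0.07], [0.03, 0.12, 0.09]),
}

total_cap = sum(cap for cap, _, _ in industries.values())
weighted_corr = sum(
    cap * pearson(rev, ret) for cap, rev, ret in industries.values()
) / total_cap
```

&lt;p&gt;(A real version would use each industry’s full history, but the aggregation step is the same.)&lt;/p&gt;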

&lt;h2 id=&quot;semiconductor-index--ml-model-performance&quot;&gt;Semiconductor index &amp;lt;&amp;gt; ML model performance&lt;/h2&gt;

&lt;p&gt;The &lt;a href=&quot;https://en.wikipedia.org/wiki/Global_Industry_Classification_Standard&quot;&gt;Global Industry Classification Standard&lt;/a&gt; does not recognize an AI/ML industry. The closest thing is probably semiconductors. For this test, I used the MSCI World Semiconductors and Semiconductor Equipment Index (SSEI), which has data going back to 2008. I compared it to three benchmarks:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;EFF ML benchmark performance, described &lt;a href=&quot;#ml-benchmark-performance&quot;&gt;above&lt;/a&gt;, 2008–2021&lt;/li&gt;
  &lt;li&gt;ML model size growth, measured as a power function of compute (for reasons explained in &lt;a href=&quot;#appendix-b-ai-progress-by-model-size-growth&quot;&gt;Appendix B&lt;/a&gt;), 2012–2021&lt;/li&gt;
  &lt;li&gt;ML model size growth, measured as a power function of the number of parameters, 2012–2021&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I also compared these to&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;the S&amp;amp;P 500&lt;/li&gt;
  &lt;li&gt;a long/short portfolio that went long SSEI and short the S&amp;amp;P 500&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Results:&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Target&lt;/th&gt;
      &lt;th&gt;SSEI&lt;/th&gt;
      &lt;th&gt;SP500&lt;/th&gt;
      &lt;th&gt;Long/Short&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;EFF ML benchmark performance&lt;/td&gt;
      &lt;td&gt;0.17&lt;/td&gt;
      &lt;td&gt;0.15&lt;/td&gt;
      &lt;td&gt;0.17&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;ML model compute^0.05&lt;/td&gt;
      &lt;td&gt;0.33&lt;/td&gt;
      &lt;td&gt;0.20&lt;/td&gt;
      &lt;td&gt;0.41&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;ML model num. params^0.076&lt;/td&gt;
      &lt;td&gt;0.52&lt;/td&gt;
      &lt;td&gt;0.50&lt;/td&gt;
      &lt;td&gt;0.46&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;These numbers suggest that there’s no reason to use semiconductor stocks as a hedge because the S&amp;amp;P 500 works about as well.&lt;/p&gt;

&lt;h2 id=&quot;ai-company-index--ml-model-performance&quot;&gt;AI company index &amp;lt;&amp;gt; ML model performance&lt;/h2&gt;

&lt;p&gt;There’s no good way to test the correlation of ML model performance to an AI index. At best, I could take one of the AI company ETFs that launched in 2018 and correlate it against ML benchmark performance. But I’d only get at most three annual data points, and that would just anchor my beliefs to a highly uncertain number, so I’m not going to do it.&lt;/p&gt;

&lt;p&gt;Perhaps we could construct a hedge portfolio in such a way that we can predict its correlation to the thing we want to hedge, even if we don’t have any historical data. I don’t know how to do that.&lt;/p&gt;

&lt;h2 id=&quot;construct-a-derivative&quot;&gt;Construct a derivative&lt;/h2&gt;

&lt;p&gt;Could we define a new derivative to hedge exactly what we want? For example, we could pay an investment bank to sell us an “AI progress swap”, that goes up in value when progress accelerates, and goes down when it slows.&lt;/p&gt;

&lt;p&gt;A derivative like this would have a negative return—probably significantly negative, because novel financial instruments tend to be expensive. And unfortunately, a higher correlation can’t make up for even a modest negative return. A derivative like this wouldn’t work unless we could buy it very cheaply.&lt;/p&gt;

&lt;h2 id=&quot;can-we-do-better-without-a-custom-derivative&quot;&gt;Can we do better (without a custom derivative)?&lt;/h2&gt;

&lt;p&gt;A naive hedge (such as a semiconductor index) only gets us about 0.2 correlation. Surely a carefully-constructed hedge would have a higher correlation than that. But how much higher? There’s no way to directly test that question empirically, and I don’t even know what a useful reference class might be. So it’s hard to say how high the correlation can go.&lt;/p&gt;

&lt;p&gt;Let’s take r=0.5 as an upper bound. That tends to be about the best correlation you can get between two variables in a messy system. I’m inclined to say that’s too high—my intuition is that you probably can’t get more than a 0.4 correlation, on the basis that more than half the correlation of a hedge comes from obvious things like industry behavior, and the obvious things get you about r=0.2. But I wouldn’t be too surprised if r&amp;gt;0.5 turned out to be achievable.&lt;/p&gt;

&lt;p&gt;(In my &lt;a href=&quot;https://mdickens.me/2022/04/04/model_of_mission-correlated_investing/&quot;&gt;previous post&lt;/a&gt; on mission-correlated investing, where I set up the theoretical model, I gave a plausible correlation range of 0.25 to 0.9. Now that I’ve spent more time looking at empirical data, I think 0.9 was too optimistic, even for an upper-end estimate.)&lt;/p&gt;

&lt;h1 id=&quot;results&quot;&gt;Results&lt;/h1&gt;

&lt;p&gt;If we take my &lt;a href=&quot;https://mdickens.me/2022/04/01/how_I_estimate_future_investment_returns/&quot;&gt;estimates&lt;/a&gt; for the return of a concentrated factor portfolio, mission hedging looks unfavorable even given optimistic volatility and correlation estimates.&lt;/p&gt;

&lt;p&gt;If we invest in the global market portfolio, whether to mission hedge depends on our choices for the key parameter values (AI progress volatility and hedge &amp;lt;&amp;gt; target correlation).&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;At 20% volatility, mission hedging looks unfavorable given a naive hedge (r=0.2), but favorable if we can find a hedge with even a modestly better correlation (r&amp;gt;=0.25).&lt;/li&gt;
  &lt;li&gt;At 15% volatility, mission hedging looks favorable if the hedge has r=0.3 or greater.&lt;/li&gt;
  &lt;li&gt;At 10% volatility, mission hedging looks favorable only if the hedge has r=0.5 or greater.&lt;/li&gt;
  &lt;li&gt;At less than 10% volatility, mission hedging does not look favorable for any plausible correlation.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If we use the &lt;a href=&quot;#compound-until-agi-utility-function&quot;&gt;compound-until-AGI&lt;/a&gt; utility function, mission hedging looks worse than ordinary investing given any reasonable input parameters (although under the most favorable parameters for mission hedging, it only loses by a thin margin).&lt;/p&gt;

&lt;p&gt;That’s all assuming our hedge matches the expected return of a broad index. If we expect the hedge to outperform the market, then it’s highly likely to look favorable compared to the global market portfolio, and likely unfavorable versus a factor portfolio (see &lt;a href=&quot;#ai-companies-could-beat-the-market&quot;&gt;“AI companies could beat the market”&lt;/a&gt; below for specific estimates). At least, that’s true given all the other assumptions I made, but my &lt;a href=&quot;#appendix-a-justifications-for-the-big-list-of-considerations&quot;&gt;other assumptions&lt;/a&gt; tended to favor mission hedging.&lt;/p&gt;

&lt;p&gt;If you want to play with my model, it’s available &lt;a href=&quot;https://github.com/michaeldickens/public-scripts/blob/master/MissionHedging.hs&quot;&gt;on GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id=&quot;why-does-ordinary-investing-probably-look-better-than-mission-hedging&quot;&gt;Why does ordinary investing (probably) look better than mission hedging?&lt;/h2&gt;

&lt;p&gt;In short, leveraged investments look &lt;a href=&quot;https://mdickens.me/2020/12/14/asset_allocation_for_altruists_with_constraints/&quot;&gt;compelling&lt;/a&gt;, and they set a high bar that mission hedging struggles to overcome. At the same level of volatility, a leveraged diversified portfolio outperforms a hedge portfolio (in expectation) by a significant margin.&lt;/p&gt;

&lt;p&gt;Most philanthropic investors don’t use enough leverage, which means on the margin, adding leverage to your portfolio increases expected utility by a lot. To compare favorably, mission hedging would have to increase expected utility by even more.&lt;/p&gt;

&lt;p&gt;By mission hedging, you’re sacrificing expected earnings in exchange for making money in worlds where money matters more. If a leveraged market portfolio has (say) 5 percentage points higher expected return than the hedge portfolio, then the hedge needs to perform 5 percentage points better than average &lt;em&gt;just to break even&lt;/em&gt;.&lt;sup id=&quot;fnref:20&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:20&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;18&lt;/a&gt;&lt;/sup&gt; So even if you find yourself in one of the worlds you were trying to hedge against (a world with faster-than-expected AI progress), it’s likely that a leveraged market portfolio outperforms the hedge asset &lt;em&gt;even in that world&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Mission hedging doesn’t work as well as conventional hedging. Say you’re a bread-making company. You have a 10% profit margin, and for the sake of simplicity, your only input cost is wheat. If the price of wheat goes up by 5%, you just lost half your profit. So you really want to hedge against wheat prices (you effectively have 10:1 leverage on wheat prices). But if you’re mission hedging and the mission target gets 5% worse, your marginal utility of money doesn’t increase by all that much.&lt;/p&gt;

&lt;p&gt;(Plus, wheat futures prices have a perfect correlation to future wheat prices, but no investment has a perfect correlation to AI progress.)&lt;/p&gt;
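
&lt;p&gt;A toy version of the bread-maker arithmetic:&lt;/p&gt;

```python
# Toy version of the bread-maker example: revenue 100, costs 90 (all wheat),
# so profit is 10, a 10% margin.
revenue = 100.0
wheat_cost = 90.0
profit_before = revenue - wheat_cost  # 10.0

# Wheat rises 5%; revenue is unchanged in the short run.
wheat_cost_after = wheat_cost * 1.05  # 94.5
profit_after = revenue - wheat_cost_after  # 5.5

# A 5% move in the input cut profit nearly in half, which is why the
# bread-maker has a strong incentive to hedge with wheat futures.
drop = 1.0 - profit_after / profit_before  # 0.45
```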

&lt;p&gt;Mission hedging only makes sense if your dollars become &lt;em&gt;far more valuable&lt;/em&gt; in the scenarios you’re trying to hedge against, and if the hedge asset earns &lt;em&gt;significant excess returns&lt;/em&gt; in those scenarios.&lt;/p&gt;

&lt;h2 id=&quot;objections-to-this-methodology&quot;&gt;Objections to this methodology&lt;/h2&gt;

&lt;p&gt;I can see (at least) two major, possibly-defeating objections to my approach:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Investing in a basket of AI companies (or some similar hedge asset) accelerates AI capabilities, which is bad.&lt;/li&gt;
  &lt;li&gt;Objective metrics like ML benchmark progress tell us little about progress toward AGI.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The first argues against mission hedging. The second could argue against or in favor, depending on how you interpret it.&lt;/p&gt;

&lt;h3 id=&quot;impact-investing&quot;&gt;Impact investing&lt;/h3&gt;

&lt;p&gt;If you invest in a company, you might cause that company to perform better (and if you divest, you might cause it to perform worse). If true, this argues against mission hedging.&lt;/p&gt;

&lt;p&gt;Paul Christiano wrote an &lt;a href=&quot;https://sideways-view.com/2019/05/25/analyzing-divestment/&quot;&gt;analysis&lt;/a&gt; suggesting that divesting works somewhat well; Jonathan Harris did an (unpublished) calculation based on &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3440147&quot;&gt;Betermier, Calvet &amp;amp; Jo (2019)&lt;/a&gt;&lt;sup id=&quot;fnref:52&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:52&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;19&lt;/a&gt;&lt;/sup&gt; and estimated that divesting has a positive but negligible effect. I only weakly understand this area, but it’s at least plausible that the direct harm of investing outweighs the benefit of mission hedging.&lt;/p&gt;

&lt;h3 id=&quot;measurability-of-ai-progress&quot;&gt;Measurability of AI progress&lt;/h3&gt;

&lt;p&gt;&lt;a href=&quot;https://www.lesswrong.com/posts/vwLxd6hhFvPbvKmBH/yudkowsky-and-christiano-discuss-takeoff-speeds?commentId=mKgEsfShs2xtaWz4K&quot;&gt;Gwern argues&lt;/a&gt; that easily-measurable AI progress benchmarks tell us approximately nothing useful about how close we are to AGI.&lt;/p&gt;

&lt;p&gt;This criticism applies to my approach of estimating AI progress volatility using &lt;a href=&quot;#ml-benchmark-performance&quot;&gt;ML benchmark performance&lt;/a&gt;. The way I measure AI progress doesn’t necessarily relate to the sort of progress we want to hedge, and the way I convert ML benchmark performance into marginal utility of money is kind of arbitrary and only weakly justified.&lt;/p&gt;

&lt;p&gt;If true, this doesn’t automatically defeat mission hedging. What it does mean is that our proxies for the volatility of AI progress don’t tell us anything about true volatility, and we have no way to estimate true volatility (if there even is such a thing).&lt;/p&gt;

&lt;p&gt;If we can’t reliably measure AI progress, that makes mission hedging seem intuitively less appealing (although I don’t quite know how to explain why). Even so, the &lt;em&gt;concept&lt;/em&gt; of mission hedging—make more money in worlds where AI progress moves faster than expected—seems sound. And Gwern’s criticism does not apply to using &lt;a href=&quot;#ai-timeline-forecasts&quot;&gt;AI timeline forecasts&lt;/a&gt; to project AI progress.&lt;/p&gt;

&lt;p&gt;And we don’t actually need our progress benchmarks to be all that similar to “actual” AI progress. We just need the two to correlate. What we need is a strong correlation of &lt;code&gt;hedge asset &amp;lt;&amp;gt; actual AI progress&lt;/code&gt;. Even if the &lt;code&gt;benchmark of AI progress &amp;lt;&amp;gt; actual AI progress&lt;/code&gt; correlation is weak, I don’t see why that would weaken the &lt;code&gt;hedge asset &amp;lt;&amp;gt; actual AI progress&lt;/code&gt; correlation in expectation. (On the contrary, it seems intuitive to me that AI company stock prices should correlate more strongly with actual progress than with benchmarks.)&lt;/p&gt;

&lt;h2 id=&quot;whats-the-strongest-case-in-favor-of-mission-hedging&quot;&gt;What’s the strongest case in favor of mission hedging?&lt;/h2&gt;

&lt;p&gt;Most of the time when a non-key input variable had wiggle room, I chose a value that favored mission hedging (as detailed in &lt;a href=&quot;#appendix-a-justifications-for-the-big-list-of-considerations&quot;&gt;Appendix A&lt;/a&gt;).&lt;sup id=&quot;fnref:21&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:21&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;20&lt;/a&gt;&lt;/sup&gt; Mission hedging still looked worse than investing in the mean-variance optimal portfolio under most key parameter values. Is there any room left to argue that we should mission hedge AI?&lt;/p&gt;

&lt;p&gt;Yes, there’s room. Below are four arguments in favor of higher investment in AI stocks.&lt;/p&gt;

&lt;h3 id=&quot;illegible-assumptions-could-bias-the-model-against-mission-hedging&quot;&gt;Illegible assumptions could bias the model against mission hedging&lt;/h3&gt;

&lt;p&gt;You could argue that my model’s assumptions implicitly favor ordinary investing. Most obviously, you could probably come up with a utility function that makes mission hedging look much better. I don’t currently know of any such function, but that seems like a key way that the model could be (very) wrong.&lt;/p&gt;

&lt;h3 id=&quot;the-volatility-of-ai-progress-could-increase&quot;&gt;The volatility of AI progress could increase&lt;/h3&gt;

&lt;p&gt;Before transformative AI arrives, we might reach a point where AI progress unexpectedly accelerates. If we hold a hedge position, we can earn potentially massive returns at this point, and then convert those returns into AI alignment work. However:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;This only works if transformative AI follows a slow takeoff. (&lt;a href=&quot;https://sideways-view.com/2018/02/24/takeoff-speeds/&quot;&gt;Paul Christiano&lt;/a&gt; believes it (probably) will; &lt;a href=&quot;https://www.lesswrong.com/posts/vwLxd6hhFvPbvKmBH/yudkowsky-and-christiano-discuss-takeoff-speeds&quot;&gt;Eliezer Yudkowsky&lt;/a&gt; believes it (probably) will not. See also &lt;a href=&quot;https://astralcodexten.substack.com/p/yudkowsky-contra-christiano-on-ai&quot;&gt;Scott Alexander’s commentary&lt;/a&gt;.&lt;sup id=&quot;fnref:13&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:13&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;21&lt;/a&gt;&lt;/sup&gt;)&lt;/li&gt;
  &lt;li&gt;Volatility would have to increase significantly to justify mission hedging.&lt;/li&gt;
  &lt;li&gt;Volatility would have to increase well before AGI, so that would-be mission hedgers can earn extra money and still have time to spend it on AI alignment work.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;When I tried to estimate the volatility of AI progress, two of my three estimates looked at historical data. Only the forecast method could capture future changes in volatility. So it’s plausible that volatility will spike in the future, and my analysis missed this because it was mostly backward-looking. (On the other hand, the forecast method provided the lowest estimated standard deviation.)&lt;/p&gt;

&lt;h3 id=&quot;investors-could-be-leverage-constrained&quot;&gt;Investors could be leverage-constrained&lt;/h3&gt;

&lt;p&gt;I assumed that investors are volatility-constrained but not leverage-constrained—I explain why in &lt;a href=&quot;#target-volatility-30&quot;&gt;Appendix A&lt;/a&gt;. However, it’s at least &lt;em&gt;plausible&lt;/em&gt; that many investors face meaningful leverage constraints. If so, then we care about maximizing the absolute return of our investments, rather than the risk-adjusted return. (An investor who can use leverage wants to pick the investment with the highest risk-adjusted return and then lever it up, which will always produce an equal or better absolute return than investing in the highest-absolute-return asset with no leverage.)&lt;/p&gt;
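
&lt;p&gt;The parenthetical claim about leverage can be illustrated with two hypothetical assets (all numbers invented):&lt;/p&gt;

```python
# Two hypothetical assets, stated as excess return over the risk-free rate:
# an MVO-style asset with a better risk-adjusted return, and a hedge-style
# asset with a higher raw return but a worse risk-adjusted return.
mvo_excess, mvo_vol = 0.05, 0.10      # Sharpe ratio 0.5
hedge_excess, hedge_vol = 0.08, 0.25  # Sharpe ratio 0.32

# An unconstrained investor levers the MVO asset up to the hedge's volatility.
leverage = hedge_vol / mvo_vol          # 2.5x
levered_excess = leverage * mvo_excess  # 12.5% excess return at the same risk

# With leverage, the high-Sharpe asset wins (12.5% beats 8%). Without
# leverage, the hedge asset's 8% raw excess return is the higher of the two.
print(levered_excess > hedge_excess)  # True
```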

&lt;p&gt;Investors with strict leverage constraints should probably mission hedge on the margin. As I mentioned &lt;a href=&quot;#why-does-ordinary-investing-probably-look-better-than-mission-hedging&quot;&gt;above&lt;/a&gt;, the big advantage of the MVO asset over the hedge asset is that it earns a far better return once it’s levered up. If you don’t use leverage, the MVO asset doesn’t earn a much higher return&lt;sup id=&quot;fnref:50&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:50&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;22&lt;/a&gt;&lt;/sup&gt;—it does have much less risk, but that’s not as important for altruistic investors.&lt;/p&gt;

&lt;h3 id=&quot;ai-companies-could-beat-the-market&quot;&gt;AI companies could beat the market&lt;/h3&gt;

&lt;p&gt;Perhaps the strongest case for &lt;em&gt;investing&lt;/em&gt; in AI companies has nothing to do with &lt;em&gt;mission hedging&lt;/em&gt;. Namely, if you believe the market underestimates how quickly AI systems will improve, then you might invest in AI stocks so you can beat the market. I don’t have a strong opinion on whether this is a good idea, but it’s not mission hedging—it’s just traditional profit-seeking investing.&lt;/p&gt;

&lt;p&gt;Some specific numbers:&lt;/p&gt;

&lt;p&gt;If we invest in the global market portfolio and assume mission target volatility = 10%, r = 0.2 (which is relatively unfavorable for mission hedging), then the hedge asset only has to beat the stock market by 1 percentage point to look better than the MVO asset.&lt;/p&gt;

&lt;p&gt;If we invest in a factor portfolio with the same unfavorable assumptions for mission hedging, the hedge has to beat the stock market by 10 (!) percentage points.&lt;/p&gt;

&lt;p&gt;If we invest in a factor portfolio with &lt;em&gt;favorable&lt;/em&gt; assumptions for mission hedging, the hedge has to beat the market by 3 percentage points.&lt;/p&gt;

&lt;p&gt;If we set mission target volatility at 5%, there’s pretty much no way for the hedge asset to look favorable.&lt;/p&gt;

&lt;p&gt;In summary, if we expect AI companies to beat the market, then we may or may not prefer to invest in AI stocks on the margin, and it could easily go either way depending on what assumptions we make.&lt;/p&gt;

&lt;h2 id=&quot;credences&quot;&gt;Credences&lt;/h2&gt;

&lt;p&gt;In this section, I offer my subjective probabilities on the most relevant questions.&lt;/p&gt;

&lt;p&gt;All things considered, AI safety donors should not mission hedge on the margin: 70%&lt;/p&gt;

&lt;p&gt;All things considered, AI safety donors should not overweight AI stocks on the margin: 60%&lt;sup id=&quot;fnref:47&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:47&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;23&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;On the condition that I’ve thought of all the important considerations, AI safety donors should not mission hedge on the margin: 75%&lt;/p&gt;

&lt;p&gt;I have missed some critically-important consideration: 20%&lt;/p&gt;

&lt;p&gt;…and that consideration makes mission hedging look better: 10%&lt;/p&gt;

&lt;p&gt;A utility function with linear marginal utility of AI progress overestimates the marginal value of mission hedging: 75%&lt;/p&gt;

&lt;p&gt;The true volatility of AI progress, inasmuch as that’s a well-defined concept, is at most 20%: 75%&lt;/p&gt;

&lt;p&gt;The maximum achievable correlation for a hedge asset with non-negative alpha&lt;sup id=&quot;fnref:27&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:27&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;24&lt;/a&gt;&lt;/sup&gt; is at most 0.5: 85%&lt;/p&gt;

&lt;p&gt;On balance, the non-key parameters’ given values make mission hedging look better than their true values would: 90%&lt;sup id=&quot;fnref:31&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:31&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;25&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;On balance, the non-key parameters’ given values make investing in AI stocks look better than their true values would: 60%&lt;/p&gt;

&lt;h2 id=&quot;future-work&quot;&gt;Future work&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Better conceptual approach.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Is there a better way to model the (potentially) binary nature of AI takeoff?&lt;/p&gt;

&lt;p&gt;If AI progress is fat-tailed (or, technically, fatter-tailed than a log-normal distribution), how does that change things?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Reconcile assumptions that were favorable and unfavorable to mission hedging.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Most assumptions in &lt;a href=&quot;#appendix-a-justifications-for-the-big-list-of-considerations&quot;&gt;Appendix A&lt;/a&gt; lean toward mission hedging. But one big assumption—that the hedge asset &lt;a href=&quot;#arithmetic-return-of-the-hedge-asset-7&quot;&gt;won’t outperform the market&lt;/a&gt;—might strongly disfavor mission hedging if it turns out the hedge can beat the market. Which force is stronger: the collection of small tilts toward mission hedging, or the (potential) single big tilt against it? How can we find out without introducing way too many free variables into the model?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Does investing in AI companies accelerate AI progress?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If yes, that could be a compelling reason not to mission hedge, although this would only strengthen my conclusion because I already found that mission hedging doesn’t look worthwhile.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Deeper investigation on empirical values.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I got all my data on ML benchmark performance from EFF’s &lt;a href=&quot;https://www.eff.org/ai/metrics&quot;&gt;AI Progress Measurement&lt;/a&gt;. We could look at more data from other sources, such as &lt;a href=&quot;https://paperswithcode.com/sota&quot;&gt;Papers With Code SOTA&lt;/a&gt;. (&lt;a href=&quot;https://github.com/paperswithcode/sota-extractor&quot;&gt;This GitHub project&lt;/a&gt; scrapes AI progress data from several sources.)&lt;/p&gt;

&lt;p&gt;My forecast data used only a few questions from Metaculus plus a single survey. I picked those because they seemed like the best sources, but a thorough analysis could incorporate more forecasts. Maybe combine forecasts from multiple surveys, such as the ones &lt;a href=&quot;https://aiimpacts.org/some-survey-results/&quot;&gt;collected by AI Impacts&lt;/a&gt;. (Although I don’t think a broader set of forecasts would change the ultimate outcome much.)&lt;/p&gt;

&lt;p&gt;Alternatively, we could project the volatility of AI progress using Ajeya Cotra’s biological anchors report (see &lt;a href=&quot;https://drive.google.com/drive/u/1/folders/15ArhEPZSTYU8f012bs6ehPS6-xmhtBPP&quot;&gt;Google Drive&lt;/a&gt;, &lt;a href=&quot;https://www.cold-takes.com/forecasting-transformative-ai-the-biological-anchors-method-in-a-nutshell/&quot;&gt;Holden’s summary&lt;/a&gt;). I didn’t do this because it seemed less relevant than my other approaches and I couldn’t easily see how to convert biological anchors into a measurable mission target, but it might be worth looking into.&lt;/p&gt;


&lt;h1 id=&quot;source-code&quot;&gt;Source code&lt;/h1&gt;

&lt;p&gt;Source code for estimating AI progress is available &lt;a href=&quot;https://github.com/michaeldickens/public-scripts/tree/master/ai-progress&quot;&gt;on GitHub&lt;/a&gt;, as is source code for &lt;a href=&quot;https://github.com/michaeldickens/public-scripts/blob/master/MissionHedging.hs&quot;&gt;deriving optimal mission hedging&lt;/a&gt;.&lt;/p&gt;

&lt;h1 id=&quot;acknowledgments&quot;&gt;Acknowledgments&lt;/h1&gt;

&lt;p&gt;Thanks to three anonymous financial professionals for reviewing drafts of this essay.&lt;/p&gt;

&lt;h1 id=&quot;appendices&quot;&gt;Appendices&lt;/h1&gt;
&lt;h2 id=&quot;appendix-a-justifications-for-the-big-list-of-considerations&quot;&gt;Appendix A: Justifications for the big list of considerations&lt;/h2&gt;

&lt;h3 id=&quot;shape-of-the-utility-function-with-respect-to-wealth-bounded-constant-relative-risk-aversion&quot;&gt;Shape of the utility function with respect to wealth: bounded constant relative risk aversion&lt;/h3&gt;

&lt;p&gt;In my &lt;a href=&quot;https://mdickens.me/2022/04/04/model_of_mission-correlated_investing/#the-utility-function&quot;&gt;previous essay&lt;/a&gt;, I set forth a list of requirements for a utility function over wealth and the mission target. Specifically with respect to wealth, I wanted a utility function that was (1) bounded above and (2) had constant relative risk aversion. I explained my reasoning &lt;a href=&quot;https://mdickens.me/2022/04/04/model_of_mission-correlated_investing/#the-utility-function&quot;&gt;in that essay&lt;/a&gt;.&lt;/p&gt;

&lt;h3 id=&quot;coefficient-of-relative-risk-aversion-2&quot;&gt;Coefficient of relative risk aversion: 2&lt;/h3&gt;

&lt;p&gt;People in the EA community tend to believe that the value of wealth is logarithmic,&lt;sup id=&quot;fnref:41&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:41&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;26&lt;/a&gt;&lt;/sup&gt;&lt;sup id=&quot;fnref:42&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:42&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;27&lt;/a&gt;&lt;/sup&gt; which would correspond to a risk aversion of 1. But a logarithmic function is unbounded. If our goal is (solely) to maximize the probability that transformative AI goes well, and if utility is linear with this probability, then utility must be bounded.&lt;/p&gt;

&lt;p&gt;We could pick a coefficient of risk aversion that gives a nearly-logarithmic utility function, say, 1.1. That gives a function that looks logarithmic for a while, but starts to flatten out as we get close to the upper bound.&lt;/p&gt;

&lt;p&gt;But I find it plausible that the coefficient should be as high as 2, which corresponds to the utility function \(U(W) = 1 - \frac{1}{W}\).&lt;sup id=&quot;fnref:38&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:38&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;28&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;
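
&lt;p&gt;As a concrete sketch, here is the constant-relative-risk-aversion family under one standard normalization, \(U(W) = \frac{W^{1-\gamma} - 1}{1-\gamma}\); a coefficient of 2 recovers the bounded form \(1 - \frac{1}{W}\), while a coefficient of 1 gives the unbounded logarithm.&lt;/p&gt;

```python
# Sketch of the CRRA utility family. With coefficient gamma = 2 it
# reduces to the bounded form U(W) = 1 - 1/W; gamma = 1 is the
# unbounded logarithmic case.
import math

def crra_utility(w, gamma):
    if gamma == 1:
        return math.log(w)                      # unbounded
    return (w ** (1 - gamma) - 1) / (1 - gamma)

# gamma = 2 reduces to 1 - 1/W:
assert abs(crra_utility(4.0, 2) - (1 - 1 / 4.0)) < 1e-12

print(crra_utility(1e9, 2))  # approaches the upper bound of 1
```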

&lt;p&gt;(You could also argue that risk aversion should be less than 1, but I do not consider that possibility in this essay.)&lt;/p&gt;

&lt;p&gt;Lower risk aversion makes the MVO asset look better. It doesn’t change the long-run optimal ratio of mission hedging to MVO investing, but it does make the MVO asset look relatively better on the margin because we become more “return-starved”—with lower risk aversion, we place higher value on earning as much return as possible.&lt;/p&gt;

&lt;p&gt;I chose a risk aversion of 2 to be generous toward mission hedging.&lt;/p&gt;

&lt;h3 id=&quot;shape-of-the-distribution-of-investment-returns-log-normal&quot;&gt;Shape of the distribution of investment returns: log-normal&lt;/h3&gt;

&lt;p&gt;It’s standard in academic finance to assume that asset prices follow log-normal distributions. It’s not true—bad outcomes are more likely than a log-normal distribution would suggest—but it’s sort of close to true, and it makes the math simple.&lt;/p&gt;

&lt;p&gt;I don’t know how things would change if I used a more accurate distribution. All three possible investments would look worse, but the real question is whether any one would look worse relative to the others.&lt;/p&gt;

&lt;p&gt;Some investments are right-skewed, such as options, &lt;a href=&quot;https://alphaarchitect.com/2017/01/go-skew-yourself-with-managed-futures/&quot;&gt;managed futures&lt;/a&gt;, and maybe whatever it is &lt;a href=&quot;https://www.alameda-research.com/&quot;&gt;Alameda Research&lt;/a&gt; does. If the MVO asset is right-skewed (or at least less left-skewed than the market), then using accurate distributions could favor the MVO asset.&lt;/p&gt;

&lt;p&gt;However, AI progress might be right-skewed if a huge amount of expected value comes from unexpectedly fast progress on the right tail.&lt;sup id=&quot;fnref:43&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:43&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;29&lt;/a&gt;&lt;/sup&gt; In that case, the hedge asset might have a stronger right skew than the MVO asset. On the other hand, if AI capabilities improve that quickly, it might not be possible to convert money into AI alignment in the limited time available.&lt;/p&gt;

&lt;h3 id=&quot;shape-of-the-distribution-of-ai-progress-log-normal&quot;&gt;Shape of the distribution of AI progress: log-normal&lt;/h3&gt;

&lt;p&gt;When I was first thinking about writing this essay, I was having a casual conversation about the subject with a friend. I asked him what he thought was the shape of the distribution of AI progress. He said, “Isn’t everything log-normal?” And I was already thinking along those lines, so I went with it.&lt;/p&gt;

&lt;p&gt;(If I assume AI progress is log-normally distributed, that makes it easy to calculate the correlated growth of AI progress and the various investments.)&lt;/p&gt;
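
&lt;p&gt;To make that concrete, here is a minimal sketch (with made-up parameter values) of how correlated log-normal growth can be sampled: draw jointly normal log-growth rates, then exponentiate.&lt;/p&gt;

```python
# Sketch: correlated log-normal growth for AI progress and a hedge asset,
# via jointly normal log-growth rates. All parameter values are
# illustrative assumptions.
import math, random, statistics

random.seed(0)
rho = 0.5           # assumed correlation of log-growth rates
mu = (0.10, 0.07)   # illustrative mean log-growth: AI progress, hedge asset
sd = (0.20, 0.25)   # illustrative log-volatilities

log_ai, log_hedge = [], []
for _ in range(100_000):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    log_ai.append(mu[0] + sd[0] * z1)
    log_hedge.append(mu[1] + sd[1] * (rho * z1 + math.sqrt(1 - rho**2) * z2))

# Levels are log-normal: growth factor = exp(log-growth rate)
ai_growth = [math.exp(x) for x in log_ai]

# The sample correlation of the log-growth rates recovers rho
m1, m2 = statistics.mean(log_ai), statistics.mean(log_hedge)
cov = sum((x - m1) * (y - m2) for x, y in zip(log_ai, log_hedge)) / len(log_ai)
corr = cov / (statistics.pstdev(log_ai) * statistics.pstdev(log_hedge))
print(round(corr, 2))  # ~0.5
```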

&lt;p&gt;One could reasonably object to this—see &lt;a href=&quot;#measurability-of-ai-progress&quot;&gt;Measurability of AI progress&lt;/a&gt;.&lt;/p&gt;

&lt;h3 id=&quot;arithmetic-return-of-the-mvo-asset-5-gmp-or-9-fp&quot;&gt;Arithmetic return of the MVO asset: 5% (GMP) or 9% (FP)&lt;/h3&gt;

&lt;p&gt;For a factor portfolio, I estimated the future return and standard deviation as described in &lt;a href=&quot;https://mdickens.me/2022/04/01/how_I_estimate_future_investment_returns/&quot;&gt;How I Estimate Future Investment Returns&lt;/a&gt;. I started from a 6% real geometric return and added projected inflation and &lt;a href=&quot;https://en.wikipedia.org/wiki/Inequality_of_arithmetic_and_geometric_means&quot;&gt;volatility drag&lt;/a&gt;&lt;sup id=&quot;fnref:14&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:14&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;30&lt;/a&gt;&lt;/sup&gt;, giving a 9% nominal arithmetic return.&lt;/p&gt;
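
&lt;p&gt;The conversion works out roughly as follows; the inflation figure here is an assumption for illustration (volatility drag is approximately half the variance).&lt;/p&gt;

```python
# Sketch of the conversion above: nominal arithmetic return
# ~ real geometric return + inflation + volatility drag (sigma^2 / 2).
# The inflation number is an illustrative assumption.
real_geometric = 0.06
inflation = 0.022        # assumed projected inflation
vol = 0.13               # factor-portfolio volatility
drag = vol**2 / 2        # ~0.0085

nominal_arithmetic = real_geometric + inflation + drag
print(round(nominal_arithmetic, 3))  # ~0.09
```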

&lt;p&gt;For the global market portfolio, I assumed that current valuations can’t predict future returns. I started from &lt;a href=&quot;https://www.aqr.com/Insights/Research/Alternative-Thinking/2022-Capital-Market-Assumptions-for-Major-Asset-Classes&quot;&gt;AQR’s 2022 return projections&lt;/a&gt; (which use the &lt;a href=&quot;https://www.investopedia.com/terms/d/ddm.asp&quot;&gt;dividend discount model&lt;/a&gt;) and ran a mean-variance optimizer over global equities and global bonds. The result had a 5% return and 9% standard deviation.&lt;/p&gt;

&lt;p&gt;I estimated future volatility using historical volatility, as described in my &lt;a href=&quot;https://mdickens.me/2022/04/01/how_I_estimate_future_investment_returns/&quot;&gt;aforementioned post&lt;/a&gt;.&lt;/p&gt;

&lt;h3 id=&quot;arithmetic-return-of-the-legacy-asset-7&quot;&gt;Arithmetic return of the legacy asset: 7%&lt;/h3&gt;

&lt;p&gt;In general, we should expect any randomly-chosen asset to have the same arithmetic return as its index. If most legacy investments are US companies, we can use the US stock market as a baseline. The dividend discount model predicts the US market to earn a 7% nominal return. I would predict a lower return than this because US stocks are so expensive right now, but changing this number doesn’t change the relative appeal of ordinary investing vs. mission hedging (unless you make it much higher).&lt;/p&gt;

&lt;h3 id=&quot;arithmetic-return-of-the-hedge-asset-7&quot;&gt;Arithmetic return of the hedge asset: 7%&lt;/h3&gt;

&lt;p&gt;As with the legacy asset, we should (generally speaking) expect the hedge asset to earn the same return as its index.&lt;/p&gt;

&lt;p&gt;That said, if you’re concerned about AI safety, you might also believe that the market underestimates the future rate of AI progress.&lt;sup id=&quot;fnref:49&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:49&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;31&lt;/a&gt;&lt;/sup&gt; If you &lt;em&gt;also&lt;/em&gt; expect a &lt;a href=&quot;https://astralcodexten.substack.com/p/yudkowsky-contra-christiano-on-ai&quot;&gt;slow takeoff&lt;/a&gt;, and you expect that AI companies will earn large profits during this takeoff, then you probably expect AI stocks to beat the market.&lt;/p&gt;

&lt;p&gt;If you overweight AI stocks because you expect them to beat the market, that’s not mission hedging, it’s just ordinary profit-seeking investing. Your mean-variance optimal portfolio should include more AI stocks, but you shouldn’t increase your allocation to AI stocks &lt;em&gt;as a hedge&lt;/em&gt;. Still, you would increase your total allocation to AI companies.&lt;/p&gt;

&lt;p&gt;See &lt;a href=&quot;#ai-companies-could-beat-the-market&quot;&gt;“AI companies could beat the market”&lt;/a&gt; for more on this.&lt;/p&gt;

&lt;h3 id=&quot;volatility-of-the-mvo-asset-9-gmp-or-13-fp&quot;&gt;Volatility of the MVO asset: 9% (GMP) or 13% (FP)&lt;/h3&gt;

&lt;p&gt;As described in &lt;a href=&quot;https://mdickens.me/2022/04/01/how_I_estimate_future_investment_returns/&quot;&gt;How I Estimate Future Investment Returns&lt;/a&gt;, I set future volatility equal to historical average volatility because there’s no compelling reason to expect it to be systematically higher or lower.&lt;/p&gt;

&lt;h3 id=&quot;volatility-of-the-legacy-asset-30&quot;&gt;Volatility of the legacy asset: 30%&lt;/h3&gt;

&lt;p&gt;Large-cap stocks tend to have standard deviations of around 40%, and mega-cap stocks have standard deviations around 30%.&lt;sup id=&quot;fnref:24&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:24&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;32&lt;/a&gt;&lt;/sup&gt; For more on this, see &lt;a href=&quot;https://mdickens.me/2020/10/18/risk_of_concentrating/&quot;&gt;The Risk of Concentrating Wealth in a Single Asset&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Arguably, the thing we care about isn’t the volatility of a single legacy asset, but the volatility of all EAs’ legacy assets put together. But which assets to include depends on exactly which other donors you care about. This doesn’t matter too much because the volatility of the legacy asset does not meaningfully affect the tradeoff between the MVO asset and the hedge asset.&lt;/p&gt;

&lt;h3 id=&quot;volatility-of-the-hedge-asset-25&quot;&gt;Volatility of the hedge asset: 25%&lt;/h3&gt;

&lt;p&gt;Empirically, smallish baskets of randomly-chosen stocks are a bit less volatile than this (~20%, see Fact 2 &lt;a href=&quot;https://mdickens.me/2021/10/29/obvious_investing_facts/&quot;&gt;here&lt;/a&gt;), but if the stocks are chosen to be correlated to a particular target, the basket will be more volatile because it’s less diversified. 25% is consistent with the standard deviations of returns for individual industries. Market-neutral industry baskets (i.e., buy the industry and short the market) have standard deviations around 20%.&lt;/p&gt;

&lt;h3 id=&quot;correlation-of-mvo-asset--legacy-asset-06-gmp-or-05-fp&quot;&gt;Correlation of MVO asset &amp;lt;&amp;gt; legacy asset: 0.6 (GMP) or 0.5 (FP)&lt;/h3&gt;

&lt;p&gt;We can roughly assume that future correlations will resemble historical correlations. It’s easy to get historical data on a basic stock/bond portfolio or on a factor portfolio. It’s a little harder to find the correlation with the legacy asset because we don’t exactly know what the legacy asset is, and even if we did, it probably doesn’t have more than a decade or two of performance history.&lt;/p&gt;

&lt;p&gt;As an approximation, I took the 30 industry portfolios from the &lt;a href=&quot;https://mba.tuck.dartmouth.edu/pages/faculty/ken.french/data_library.html&quot;&gt;Ken French Data Library&lt;/a&gt;, on the assumption that the correlation between the average industry and an investment strategy is &lt;em&gt;roughly&lt;/em&gt; the same as the correlation between the legacy asset and that same investment strategy.&lt;/p&gt;

&lt;p&gt;I also used industry portfolios as proxies for the hedge asset, because a hedge would consist of a collection of related stocks in the same way that an industry is a collection of related stocks.&lt;/p&gt;

&lt;p&gt;For this test, I simulated the global market portfolio with global stocks + bonds. I simulated a factor portfolio using &lt;a href=&quot;https://mdickens.me/2021/02/08/concentrated_stock_selection/&quot;&gt;concentrated&lt;/a&gt; long-only value and momentum factor portfolios plus a smaller short market position plus actual net historical performance of the &lt;a href=&quot;http://www.chesapeakecapital.com/performance/&quot;&gt;Chesapeake managed futures strategy&lt;/a&gt; (which I used because it provides a longer track record than any other real-world managed futures fund that I’m aware of).&lt;sup id=&quot;fnref:45&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:45&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;33&lt;/a&gt;&lt;/sup&gt; This loosely resembles how I invest my own money. Alternatively, I could have constructed a factor portfolio that combines multiple long/short factors plus market beta, which would more closely resemble academic factor definitions. That would probably scale better for extremely wealthy investors, and would give lower correlations but with higher expenses/transaction costs. &lt;a href=&quot;https://www.aqr.com/Strategies&quot;&gt;AQR&lt;/a&gt; is one firm that offers diversified factor strategies that are designed to scale well for large investors.&lt;/p&gt;

&lt;p&gt;(If we just used the historical returns of factor strategies to estimate future performance, we’d probably tend to overestimate future returns; but for right now we care about correlation, not return, so that doesn’t really matter.)&lt;/p&gt;

&lt;p&gt;I calculated the correlations between the simulated MVO asset and each individual industry. Then I looked at the mean, median, and market-cap-weighted mean of the correlations, rounded to the nearest tenth. The mean, median, and weighted mean were all close enough together that they round to the same number. Using the global market portfolio (GMP) as the MVO asset produced an average correlation of 0.6, and using the factor portfolio (FP) gave an average of 0.5.&lt;/p&gt;
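
&lt;p&gt;The aggregation step can be sketched as follows; the per-industry correlations and market caps here are made up for illustration.&lt;/p&gt;

```python
# Sketch: given per-industry correlations with the MVO asset and
# market-cap weights, take the mean, median, and cap-weighted mean,
# then round to the nearest tenth. All inputs are hypothetical.
import statistics

corrs = [0.55, 0.62, 0.58, 0.64, 0.51, 0.60]  # hypothetical correlations
caps = [300, 150, 800, 120, 400, 230]         # hypothetical market caps

mean = statistics.mean(corrs)
median = statistics.median(corrs)
weighted = sum(c * w for c, w in zip(corrs, caps)) / sum(caps)

print([round(x, 1) for x in (mean, median, weighted)])  # all round to 0.6
```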

&lt;h3 id=&quot;correlation-of-mvo-asset--hedge-asset-06-gmp-or-05-fp&quot;&gt;Correlation of MVO asset &amp;lt;&amp;gt; hedge asset: 0.6 (GMP) or 0.5 (FP)&lt;/h3&gt;

&lt;p&gt;See previous section.&lt;/p&gt;

&lt;h3 id=&quot;correlation-of-legacy-asset--hedge-asset-07&quot;&gt;Correlation of legacy asset &amp;lt;&amp;gt; hedge asset: 0.7&lt;/h3&gt;

&lt;p&gt;The total stock market has an average correlation of a little under 0.7 to any individual industry. The correlation between the legacy asset and the hedge asset is probably about the same—maybe a bit higher, since EAs tend to overweight tech/AI stocks.&lt;/p&gt;

&lt;h3 id=&quot;correlation-of-mvo-asset--ai-progress-00-gmp-or-01-fp&quot;&gt;Correlation of MVO asset &amp;lt;&amp;gt; AI progress: 0.0 (GMP) or –0.1 (FP)&lt;/h3&gt;

&lt;p&gt;As shown &lt;a href=&quot;#semiconductor-index--ml-model-performance&quot;&gt;above&lt;/a&gt;, I looked at the correlation between the S&amp;amp;P 500 and ML benchmark performance, and found a correlation of about 0.2 (depending on what benchmark I use). The global market portfolio has a somewhat lower correlation, at 0.0 (it’s slightly positive but it rounds off to 0). A factor portfolio has a correlation of roughly –0.1. (Remember that a higher correlation with AI progress is better.)&lt;/p&gt;

&lt;p&gt;Caveat: These correlations are unstable. I got these coefficients using the largest date range available (2008–2021), but if I change the date range, I can get substantially different numbers. (For example, if we look at 2012–2021 instead, the S&amp;amp;P 500 now has a negative correlation to AI progress.) So consider these estimates not particularly reliable. But they should be at least directionally correct—a hedge has a higher correlation than the global market portfolio, which has a higher correlation than a factor portfolio.&lt;/p&gt;

&lt;h3 id=&quot;correlation-of-legacy-asset--ai-progress-02&quot;&gt;Correlation of legacy asset &amp;lt;&amp;gt; AI progress: 0.2&lt;/h3&gt;

&lt;p&gt;The legacy asset has a slightly higher correlation to AI progress than the S&amp;amp;P 500 does because the legacy asset is tilted toward AI or AI-adjacent stocks. I don’t have the data necessary to empirically estimate the correlation, but I would guess that it’s about 0.2. (And, as I said in the previous section, an empirical estimate wouldn’t be very reliable anyway.)&lt;/p&gt;

&lt;h3 id=&quot;impact-investing-factor-ignored&quot;&gt;Impact investing factor: ignored&lt;/h3&gt;

&lt;p&gt;Discussed &lt;a href=&quot;#impact-investing&quot;&gt;above&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In this essay, I simply assume that investing in a company’s stock has no direct effect on the company. That said, a philanthropist who’s leaning toward mission hedging should first carefully consider the effect of impact investing.&lt;/p&gt;

&lt;h3 id=&quot;target-volatility-30&quot;&gt;Target volatility: 30%&lt;/h3&gt;

&lt;p&gt;Before we ask what the volatility target should be, first we should ask: do we even want a volatility target?&lt;/p&gt;

&lt;p&gt;Many investors behave as if they’re leverage-constrained but not volatility-constrained—they invest in extremely risky assets such as cryptocurrencies or individual stocks, but not in (relatively) stable leveraged investments such as &lt;a href=&quot;https://en.wikipedia.org/wiki/Risk_parity&quot;&gt;risk parity&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;It’s true that &lt;em&gt;some&lt;/em&gt; investors can’t use leverage; and some investors can use up to 2:1 leverage with &lt;a href=&quot;https://www.investopedia.com/terms/r/regulationt.asp&quot;&gt;Reg T margin&lt;/a&gt;, but not more than that (&lt;a href=&quot;https://www.investopedia.com/terms/p/portfolio-margin.asp&quot;&gt;portfolio margin&lt;/a&gt; allows greater than 2:1 leverage). However:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Wealthier investors can get access to portfolio margin fairly easily, and these investors represent the majority of EA wealth.&lt;/li&gt;
  &lt;li&gt;Investors who cannot use leverage themselves (or only limited leverage) can still invest in &lt;em&gt;funds&lt;/em&gt; that use leverage internally.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;So we should not treat leverage as a constraint. That said, if we &lt;em&gt;are&lt;/em&gt; leverage-constrained, that means we should invest in assets with particularly high volatility (as long as we’re compensated for that volatility with extra expected return). This argues in favor of mission hedging because a focused hedge portfolio will have higher volatility than the market. Under a leverage constraint, mission hedging looks better than ordinary investing for most reasonable parameter values.&lt;/p&gt;

&lt;p&gt;Okay, given that we want a volatility target, what number should we shoot for?&lt;/p&gt;

&lt;p&gt;I somewhat-arbitrarily chose a 30% standard deviation as the target. It’s a nice round number, and it’s roughly twice the standard deviation of the global stock market. And you can’t go too much higher than 30% without running a serious risk of going bust.&lt;/p&gt;
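
&lt;p&gt;For reference, the leverage needed to hit a volatility target is just the ratio of target volatility to asset volatility (assuming leverage scales volatility linearly); e.g., the table’s 1.6× for the 1927–2021 US market implies the market had roughly 18.75% volatility over that period.&lt;/p&gt;

```python
# Sketch of the "Leverage" column in the table below: assuming leverage
# scales volatility linearly, the leverage required to hit a 30% target
# is target_vol / asset_vol.
target_vol = 0.30

def required_leverage(asset_vol):
    return target_vol / asset_vol

# ~18.75% US market volatility implies the table's 1.6x leverage:
print(round(required_leverage(0.1875), 1))  # 1.6
```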

&lt;p&gt;Historically, if you had invested in a well-diversified portfolio at 30% target volatility, you would have experienced at worst an ~85% drawdown during the Great Depression and a ~70% drawdown in the 2008 recession.&lt;/p&gt;

&lt;p&gt;Below, I present a table of historical max drawdowns for a variety of assets and strategies, levered up to 30% volatility.&lt;sup id=&quot;fnref:48&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:48&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;34&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Asset Class&lt;/th&gt;
      &lt;th&gt;Sample Range&lt;/th&gt;
      &lt;th&gt;Leverage&lt;/th&gt;
      &lt;th&gt;Max DD&lt;/th&gt;
      &lt;th&gt;DD Range&lt;/th&gt;
      &lt;th&gt;Worst Month&lt;/th&gt;
      &lt;th&gt;Max Leverage&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;US Market&lt;/td&gt;
      &lt;td&gt;1927–2021&lt;/td&gt;
      &lt;td&gt;1.6&lt;/td&gt;
      &lt;td&gt;96.1%&lt;/td&gt;
      &lt;td&gt;1929–1932&lt;/td&gt;
      &lt;td&gt;-29.1%&lt;/td&gt;
      &lt;td&gt;-3.4&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;US HML&lt;/td&gt;
      &lt;td&gt;1927–2021&lt;/td&gt;
      &lt;td&gt;2.5&lt;/td&gt;
      &lt;td&gt;94.2%&lt;/td&gt;
      &lt;td&gt;2006–2020&lt;/td&gt;
      &lt;td&gt;-16.8%&lt;/td&gt;
      &lt;td&gt;-6&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;US SMB&lt;/td&gt;
      &lt;td&gt;1927–2021&lt;/td&gt;
      &lt;td&gt;2.7&lt;/td&gt;
      &lt;td&gt;94.6%&lt;/td&gt;
      &lt;td&gt;1983–1999&lt;/td&gt;
      &lt;td&gt;-14.0%&lt;/td&gt;
      &lt;td&gt;-7.1&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;US UMD&lt;/td&gt;
      &lt;td&gt;1927–2021&lt;/td&gt;
      &lt;td&gt;1.8&lt;/td&gt;
      &lt;td&gt;99.4%&lt;/td&gt;
      &lt;td&gt;1932–1939&lt;/td&gt;
      &lt;td&gt;-52.3%&lt;/td&gt;
      &lt;td&gt;-1.9&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;US Combo&lt;/td&gt;
      &lt;td&gt;1927–2021&lt;/td&gt;
      &lt;td&gt;4.2&lt;/td&gt;
      &lt;td&gt;83.2%&lt;/td&gt;
      &lt;td&gt;1937–1940&lt;/td&gt;
      &lt;td&gt;-8.7%&lt;/td&gt;
      &lt;td&gt;-11.5&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;US Market&lt;/td&gt;
      &lt;td&gt;1940–2021&lt;/td&gt;
      &lt;td&gt;2.0&lt;/td&gt;
      &lt;td&gt;81.1%&lt;/td&gt;
      &lt;td&gt;2000–2009&lt;/td&gt;
      &lt;td&gt;-22.6%&lt;/td&gt;
      &lt;td&gt;-4.4&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;US HML&lt;/td&gt;
      &lt;td&gt;1940–2021&lt;/td&gt;
      &lt;td&gt;3.1&lt;/td&gt;
      &lt;td&gt;96.9%&lt;/td&gt;
      &lt;td&gt;1983–2000&lt;/td&gt;
      &lt;td&gt;-16.8%&lt;/td&gt;
      &lt;td&gt;-6&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;US SMB&lt;/td&gt;
      &lt;td&gt;1940–2021&lt;/td&gt;
      &lt;td&gt;3.2&lt;/td&gt;
      &lt;td&gt;97.8%&lt;/td&gt;
      &lt;td&gt;2006–2020&lt;/td&gt;
      &lt;td&gt;-14.0%&lt;/td&gt;
      &lt;td&gt;-7.1&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;US UMD&lt;/td&gt;
      &lt;td&gt;1940–2021&lt;/td&gt;
      &lt;td&gt;2.3&lt;/td&gt;
      &lt;td&gt;93.3%&lt;/td&gt;
      &lt;td&gt;2002–2010&lt;/td&gt;
      &lt;td&gt;-34.4%&lt;/td&gt;
      &lt;td&gt;-2.9&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;US Combo&lt;/td&gt;
      &lt;td&gt;1940–2021&lt;/td&gt;
      &lt;td&gt;5.2&lt;/td&gt;
      &lt;td&gt;80.7%&lt;/td&gt;
      &lt;td&gt;2007–2011&lt;/td&gt;
      &lt;td&gt;-8.4%&lt;/td&gt;
      &lt;td&gt;-11.9&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;10Y Treasuries&lt;/td&gt;
      &lt;td&gt;1947–2018&lt;/td&gt;
      &lt;td&gt;4.2&lt;/td&gt;
      &lt;td&gt;95.4%&lt;/td&gt;
      &lt;td&gt;1950–1981&lt;/td&gt;
      &lt;td&gt;-7.9%&lt;/td&gt;
      &lt;td&gt;-12.7&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Commodities&lt;/td&gt;
      &lt;td&gt;1878–2020&lt;/td&gt;
      &lt;td&gt;1.7&lt;/td&gt;
      &lt;td&gt;98.8%&lt;/td&gt;
      &lt;td&gt;1919–1933&lt;/td&gt;
      &lt;td&gt;-20.9%&lt;/td&gt;
      &lt;td&gt;-4.8&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;AQR TSMOM&lt;/td&gt;
      &lt;td&gt;1985–2020&lt;/td&gt;
      &lt;td&gt;2.4&lt;/td&gt;
      &lt;td&gt;56.3%&lt;/td&gt;
      &lt;td&gt;2016–2020&lt;/td&gt;
      &lt;td&gt;-10.6%&lt;/td&gt;
      &lt;td&gt;-9.4&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Dev Market&lt;/td&gt;
      &lt;td&gt;1991–2021&lt;/td&gt;
      &lt;td&gt;1.8&lt;/td&gt;
      &lt;td&gt;80.1%&lt;/td&gt;
      &lt;td&gt;2007–2009&lt;/td&gt;
      &lt;td&gt;-21.0%&lt;/td&gt;
      &lt;td&gt;-4.8&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Dev SMB&lt;/td&gt;
      &lt;td&gt;1991–2021&lt;/td&gt;
      &lt;td&gt;4.2&lt;/td&gt;
      &lt;td&gt;91.3%&lt;/td&gt;
      &lt;td&gt;1990–2020&lt;/td&gt;
      &lt;td&gt;-5.8%&lt;/td&gt;
      &lt;td&gt;-17.2&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Dev HML&lt;/td&gt;
      &lt;td&gt;1991–2021&lt;/td&gt;
      &lt;td&gt;3.7&lt;/td&gt;
      &lt;td&gt;87.9%&lt;/td&gt;
      &lt;td&gt;2009–2020&lt;/td&gt;
      &lt;td&gt;-10.9%&lt;/td&gt;
      &lt;td&gt;-9.2&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Dev UMD&lt;/td&gt;
      &lt;td&gt;1991–2021&lt;/td&gt;
      &lt;td&gt;2.5&lt;/td&gt;
      &lt;td&gt;79.1%&lt;/td&gt;
      &lt;td&gt;2009–2009&lt;/td&gt;
      &lt;td&gt;-22.5%&lt;/td&gt;
      &lt;td&gt;-4.4&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Dev Combo&lt;/td&gt;
      &lt;td&gt;1991–2021&lt;/td&gt;
      &lt;td&gt;6.1&lt;/td&gt;
      &lt;td&gt;67.1%&lt;/td&gt;
      &lt;td&gt;2008–2009&lt;/td&gt;
      &lt;td&gt;-4.2%&lt;/td&gt;
      &lt;td&gt;-23.8&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;US Market&lt;/td&gt;
      &lt;td&gt;1871–1926&lt;/td&gt;
      &lt;td&gt;2.7&lt;/td&gt;
      &lt;td&gt;71.0%&lt;/td&gt;
      &lt;td&gt;1906–1907&lt;/td&gt;
      &lt;td&gt;-10.2%&lt;/td&gt;
      &lt;td&gt;-9.8&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;US HML&lt;/td&gt;
      &lt;td&gt;1871–1926&lt;/td&gt;
      &lt;td&gt;1.8&lt;/td&gt;
      &lt;td&gt;87.4%&lt;/td&gt;
      &lt;td&gt;1884–1891&lt;/td&gt;
      &lt;td&gt;-21.0%&lt;/td&gt;
      &lt;td&gt;-4.8&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;US UMD&lt;/td&gt;
      &lt;td&gt;1871–1926&lt;/td&gt;
      &lt;td&gt;2.3&lt;/td&gt;
      &lt;td&gt;94.9%&lt;/td&gt;
      &lt;td&gt;1890–1904&lt;/td&gt;
      &lt;td&gt;-14.6%&lt;/td&gt;
      &lt;td&gt;-6.8&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;US Combo&lt;/td&gt;
      &lt;td&gt;1871–1926&lt;/td&gt;
      &lt;td&gt;3.9&lt;/td&gt;
      &lt;td&gt;67.9%&lt;/td&gt;
      &lt;td&gt;1874–1877&lt;/td&gt;
      &lt;td&gt;-7.8%&lt;/td&gt;
      &lt;td&gt;-12.8&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Column definitions:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;“Leverage”: Amount of leverage required to hit 30% volatility.&lt;/li&gt;
  &lt;li&gt;“Max DD”: Maximum drawdown that this asset class would have experienced at the given level of leverage.&lt;/li&gt;
  &lt;li&gt;“DD Range”: Peak-to-trough date range of the maximum drawdown.&lt;/li&gt;
  &lt;li&gt;“Max Leverage”: Maximum amount of leverage that would not have experienced a 100% drawdown.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Data series definitions:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;US = United States&lt;/li&gt;
  &lt;li&gt;Dev = developed markets, excluding the United States&lt;/li&gt;
  &lt;li&gt;Market = total stock market&lt;/li&gt;
  &lt;li&gt;HML = long/short value factor (“high minus low” book-to-market ratio)&lt;/li&gt;
  &lt;li&gt;SMB = long/short size factor (“small minus big”)&lt;/li&gt;
  &lt;li&gt;UMD = long/short momentum factor (“up minus down”)&lt;/li&gt;
  &lt;li&gt;Combo = volatility-weighted combination of Market, HML, and UMD&lt;/li&gt;
  &lt;li&gt;TSMOM = time series momentum (a.k.a. trend following)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;General notes:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;These portfolios use a constant amount of leverage that produces an average 30% standard deviation over the whole period. This is a little unfair because we wouldn’t have known in advance how much leverage to use. At the same time, recent volatility can somewhat predict short-term future volatility, so we could adjust leverage as expectations change instead of using fixed leverage over the whole period.&lt;/li&gt;
  &lt;li&gt;Data on historical performance is pulled from the &lt;a href=&quot;https://mba.tuck.dartmouth.edu/pages/faculty/ken.french/data_library.html&quot;&gt;Ken French Data Library&lt;/a&gt; (for most data), &lt;a href=&quot;https://som.yale.edu/centers/international-center-for-finance/data/historical-financial-research-data/cowlesdata&quot;&gt;Cowles Commission for Research in Economics&lt;/a&gt; (for 1871–1926 data), &lt;a href=&quot;https://www.aqr.com/Insights/Datasets&quot;&gt;AQR data sets&lt;/a&gt; (for TSMOM and commodities), and &lt;a href=&quot;https://eur.figshare.com/articles/Data_Treasury_Bond_Return_Data_Starting_in_1962/8152748&quot;&gt;Swinkels (2019), “Data: International government bond returns since 1947”&lt;/a&gt; (for 10Y Treasuries).&lt;/li&gt;
  &lt;li&gt;All are gross of management fees and transaction costs, except for AQR TSMOM which is net of estimated fees and costs.&lt;/li&gt;
  &lt;li&gt;I assume leverage costs the risk-free rate plus 1%, and short positions earn the risk-free rate.&lt;/li&gt;
  &lt;li&gt;Leverage is rebalanced monthly.&lt;/li&gt;
&lt;/ul&gt;
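&lt;p&gt;As a minimal sketch of the mechanics in the notes above (with made-up monthly returns, not the actual data series), the fixed-leverage return and maximum-drawdown calculations look roughly like this:&lt;/p&gt;

```python
# Sketch of the leverage scheme described in the notes above, on illustrative
# (made-up) monthly returns. Leverage is fixed at the single ratio that would
# have produced ~30% annualized volatility, rebalanced monthly, and borrowed
# money costs the risk-free rate plus a 1% annual spread.
import statistics

def levered_returns(monthly_returns, monthly_rf, target_vol=0.30, spread=0.01):
    """Fixed leverage targeting ~30% annualized volatility, rebalanced monthly."""
    ann_vol = statistics.stdev(monthly_returns) * 12 ** 0.5
    lev = target_vol / ann_vol
    # Each month: L times the asset return, minus (L - 1) units of borrowing
    # at the monthly risk-free rate plus 1%/12
    return lev, [lev * r - (lev - 1) * (rf + spread / 12)
                 for r, rf in zip(monthly_returns, monthly_rf)]

def max_drawdown(returns):
    """Largest peak-to-trough decline of the compounded wealth series."""
    wealth, peak, mdd = 1.0, 1.0, 0.0
    for r in returns:
        wealth *= 1 + r
        peak = max(peak, wealth)
        mdd = max(mdd, 1 - wealth / peak)
    return mdd
```

&lt;p&gt;This ignores the volatility-forecasting refinement mentioned in the first note; it simply picks, in hindsight, the one leverage ratio that hits 30% volatility over the whole sample.&lt;/p&gt;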

&lt;h3 id=&quot;cost-of-leverage-2&quot;&gt;Cost of leverage: 2%&lt;/h3&gt;

&lt;p&gt;As of 2022-05-18, Interactive Brokers &lt;a href=&quot;https://www.interactivebrokers.com/en/trading/margin-rates.php&quot;&gt;charges&lt;/a&gt; 1.83% for leverage on amounts over $100,000, and 1.58% on amounts over $1 million. 2% assumes rates will go up in the future.&lt;/p&gt;

&lt;p&gt;(If rates go up, bond yields should also go up, which means their prices will decline in the short run and earn higher returns in the long run.&lt;sup id=&quot;fnref:36&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:36&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;35&lt;/a&gt;&lt;/sup&gt;)&lt;/p&gt;
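&lt;p&gt;A back-of-the-envelope version of that parenthetical, using an assumed duration of 9 years for a 10-year Treasury (both numbers here are illustrative assumptions, not taken from the data above):&lt;/p&gt;

```python
# Rough illustration of the yield/price tradeoff: if yields rise, a bond's
# price falls by roughly duration * (yield change), but it then compounds at
# the new, higher yield, recouping the loss over roughly one duration's worth
# of years. All numbers are assumed for illustration.
duration = 9.0    # assumed approximate duration of a 10-year Treasury
dy = 0.01         # yields rise by one percentage point

immediate_price_change = -duration * dy   # roughly -9% up front
extra_annual_yield = dy                   # +1% per year thereafter
years_to_break_even = duration            # loss recouped after ~9 years
```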

&lt;h3 id=&quot;proportion-of-portfolio-currently-invested-in-mvolegacyhedge-assets-50--50--0&quot;&gt;Proportion of portfolio currently invested in MVO/legacy/hedge assets: 50% / 50% / 0%&lt;/h3&gt;

&lt;p&gt;I roughly estimate that:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;50% of EA funding comes from money in a small number of legacy assets, mostly held by a few big funders.&lt;/li&gt;
  &lt;li&gt;50% comes out of other investments, or out of human capital (people donate as they earn, or take decreased salaries to do direct work), and almost nobody uses leverage.&lt;/li&gt;
  &lt;li&gt;Almost nobody mission hedges AI.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;A couple of issues with these assumptions:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Even excluding legacy investments, the EA portfolio probably does not look much like the mean-variance optimal portfolio.&lt;/li&gt;
  &lt;li&gt;Even if almost nobody mission hedges AI, some EAs overweight AI stocks for other reasons.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So I think these numbers excessively favor mission hedging, but I chose them because they’re simple.&lt;/p&gt;

&lt;p&gt;If we did want to model diversified-but-suboptimal investments, we could treat the legacy asset as including all suboptimal investments, and change the numbers accordingly. I will leave that as an exercise.&lt;/p&gt;

&lt;h2 id=&quot;appendix-b-ai-progress-by-model-size-growth&quot;&gt;Appendix B: AI progress by model size growth&lt;/h2&gt;

&lt;p&gt;In this appendix, I explain how I approximated AI progress using the growth in the sizes of large ML models.&lt;/p&gt;

&lt;p&gt;I looked at the growth rate in the size (measured by training FLOPS or number of parameters) of large ML models. Then I used known relationships between model size and performance to estimate the growth in performance.&lt;/p&gt;

&lt;p&gt;I ended up not including this in the main essay because why should I try to approximate performance in terms of model size when I have a data set that lets me measure performance directly? But I’m including it in the appendix for the sake of completeness, and because it produced the highest volatility estimate of any approach I used.&lt;/p&gt;

&lt;p&gt;I downloaded the &lt;a href=&quot;https://docs.google.com/spreadsheets/d/1AAIebjNsnJj_uKALHbXNfn3_YsT6sHXtCU0q7OIPuc4/edit#gid=0&quot;&gt;data&lt;/a&gt; from &lt;a href=&quot;https://arxiv.org/abs/2202.05924&quot;&gt;Sevilla et al. (2022)&lt;/a&gt;&lt;sup id=&quot;fnref:5&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:5&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;36&lt;/a&gt;&lt;/sup&gt;, which gives compute (FLOPS) and number of parameters for a collection of published ML models, organized by date. I found the average compute/params per year and used that to get the growth rate for each year, and took the standard deviation of the growth rates. I only included ML models from 2012 to 2021 because earlier years had too few samples.&lt;/p&gt;
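&lt;p&gt;The procedure can be sketched as follows, with placeholder (year, training FLOPS) pairs standing in for the Sevilla et al. data:&lt;/p&gt;

```python
# Sketch of the procedure described above: average model size per year,
# year-over-year growth rates, and the standard deviation of those growth
# rates. The (year, FLOPS) pairs are hypothetical placeholders, not the
# Sevilla et al. data.
from collections import defaultdict
import statistics

models = [(2018, 1e21), (2018, 3e21), (2019, 8e21), (2019, 1.2e22),
          (2020, 3e22), (2021, 1e23)]

by_year = defaultdict(list)
for year, flops in models:
    by_year[year].append(flops)

years = sorted(by_year)
avg = {y: statistics.mean(by_year[y]) for y in years}

# growth rate = (this year's average size / last year's average size) - 1
growth = [avg[y] / avg[prev] - 1 for prev, y in zip(years, years[1:])]
sigma = statistics.stdev(growth)
```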

&lt;p&gt;Model size isn’t directly what we care about. What we care about is model performance.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://arxiv.org/abs/2001.08361v1&quot;&gt;Kaplan et al. (2020)&lt;/a&gt;&lt;sup id=&quot;fnref:6&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:6&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;37&lt;/a&gt;&lt;/sup&gt; found that neural language model loss scales according to a power law—specifically, it scales with compute^0.050 and parameters^0.076&lt;sup id=&quot;fnref:8&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:8&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;38&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://arxiv.org/abs/1712.00409&quot;&gt;Hestness et al. (2017)&lt;/a&gt;&lt;sup id=&quot;fnref:7&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:7&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;39&lt;/a&gt;&lt;/sup&gt; found that neural network performance scales with parameters according to a power law across many domains, and that most domains scale between 0.07 and 0.35.&lt;/p&gt;

&lt;p&gt;The first paper is newer and covers seven orders of magnitude instead of only 2–3, but the second paper looks at multiple domains instead of just NLP.&lt;/p&gt;

&lt;p&gt;I transformed the Sevilla et al. data according to power laws, and then calculated the standard deviation in predicted performance growth:&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Method&lt;/th&gt;
      &lt;th&gt;\(\sigma\)&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;compute^0.050&lt;/td&gt;
      &lt;td&gt;7.4%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;parameters^0.076&lt;/td&gt;
      &lt;td&gt;9.4%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;parameters^0.350&lt;/td&gt;
      &lt;td&gt;51%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;The first two numbers fall on the low end of the range that my other methods (industry revenue, benchmark progress, timeline forecasts) found. The third number—based on the upper end of the scaling exponents found by Hestness et al. (2017)—gives a much larger volatility estimate than any other method.&lt;/p&gt;
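&lt;p&gt;To make the transformation concrete, here is a sketch using hypothetical year-over-year size growth rates (the exponents are the published ones; the growth rates are made up):&lt;/p&gt;

```python
# Sketch of the power-law transformation: if model size grows by a factor
# (1 + g) in a year and performance scales with size^k, then predicted
# performance grows by a factor (1 + g)^k. The growth rates below are
# hypothetical; only the exponents come from the cited papers.
import statistics

size_growth = [4.0, 2.0, 2.5, 1.5]  # hypothetical year-over-year growth rates

def perf_growth_sigma(growth_rates, k):
    """Std dev of implied performance growth under a size^k power law."""
    perf = [(1 + g) ** k - 1 for g in growth_rates]
    return statistics.stdev(perf)

sigma_compute = perf_growth_sigma(size_growth, 0.050)  # Kaplan et al., compute
sigma_params = perf_growth_sigma(size_growth, 0.350)   # Hestness et al., upper end
```

&lt;p&gt;A larger exponent stretches the same size-growth variation into a wider spread of predicted performance growth, which is why the parameters^0.35 row in the table gives so much higher a volatility.&lt;/p&gt;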

&lt;p&gt;I tested how good mission hedging looks with 51% volatility. If we invest in the global market portfolio, mission hedging beats ordinary investing if we can get a moderate correlation between AI progress and the hedge asset (r=0.4 or greater). If we invest in a factor portfolio, we need a correlation of at least r=0.7. (Under the compound-until-AGI utility function, the hedge asset looks much worse than the MVO asset regardless of the correlation.)&lt;/p&gt;

&lt;p&gt;A power law of parameters^0.35 gives a growth rate (34%) consistent with the observed performance growth across models in EFF’s ML benchmarks data set (32%). But it gives a much higher standard deviation of 51% compared to EFF’s 16%. I tend to believe that the high volatility in the Sevilla et al. data comes from a small sample size plus variation in ML model quality, so it does not reflect “true” volatility. (If we’re using model size as a proxy for performance, and a direct measure of performance disagrees, I trust the direct measure more.&lt;sup id=&quot;fnref:30&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:30&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;40&lt;/a&gt;&lt;/sup&gt;)&lt;/p&gt;

&lt;p&gt;On the other hand, one could argue that ML model size predicts “true” AI progress, not just performance on certain benchmarks.&lt;/p&gt;

&lt;h2 id=&quot;appendix-c-optimal-equilibrium-allocation-to-mission-hedging&quot;&gt;Appendix C: Optimal equilibrium allocation to mission hedging&lt;/h2&gt;

&lt;p&gt;So far, I’ve focused on the marginal utility of changing portfolio allocations. We might also want to know the optimal allocation in equilibrium—the allocation we’d want if we could reallocate the entire EA portfolio at once.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://mdickens.me/2022/04/04/model_of_mission-correlated_investing/&quot;&gt;Previously&lt;/a&gt;, I wrote about mission hedging allocation under a simple utility function where marginal utility of wealth is linear with the mission target. Under this model, and using the range of parameter values I found for hedging AI:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;The optimal allocation to the hedge asset varies from 5% to 24%.&lt;/li&gt;
  &lt;li&gt;The optimal allocation to the MVO asset varies from 296% to 538%.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Do not interpret these numbers as providing reasonable-in-practice upper and lower bounds. I arrived at these ranges by varying only three parameters:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;volatility of AI progress from 5% to 20%&lt;/li&gt;
  &lt;li&gt;correlation of the hedge to AI progress from 0.25 to 0.6&lt;/li&gt;
  &lt;li&gt;coefficient of relative risk aversion from 1.1 to 2&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I set the cost of leverage at 0% and put no restriction on maximum leverage. All other parameter values were taken as specified &lt;a href=&quot;#a-big-list-of-considerations&quot;&gt;above&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The assumptions of 0% cost of leverage and log-normal price distributions cause this model to overestimate optimal leverage.&lt;sup id=&quot;fnref:25&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:25&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;41&lt;/a&gt;&lt;/sup&gt; Most other assumptions cause this model to overestimate the optimal hedge allocation.&lt;/p&gt;

&lt;p&gt;If we cap leverage at 2:1, then this model suggests that philanthropists in aggregate should allocate 200% of their portfolios to the MVO asset and none to the hedge asset, even under the parameter values that most strongly favor mission hedging.&lt;sup id=&quot;fnref:26&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:26&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;42&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;h3 id=&quot;optimal-allocation-under-compound-until-agi-utility&quot;&gt;Optimal allocation under “compound-until-AGI” utility&lt;/h3&gt;

&lt;p&gt;In this section, I provide optimal allocations under the “compound-until-AGI” utility function given &lt;a href=&quot;#shape-of-the-utility-function&quot;&gt;above&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I will reference the same model parameters as in &lt;a href=&quot;https://mdickens.me/2022/04/04/model_of_mission-correlated_investing/#table-of-variables&quot;&gt;my previous essay&lt;/a&gt; without redefining them. See the previous essay for definitions. In this case, the mission target \(b\) is the number of years until transformative AI. (Higher \(b\) means slower AI progress.)&lt;/p&gt;

&lt;p&gt;First, some notes on the general behavior of this utility function:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Unlike with the simple utility function, expected utility does not vary linearly with every parameter.&lt;/li&gt;
  &lt;li&gt;Optimal hedge allocation \(\omega_h\) is concave with \(\sigma_b\).&lt;/li&gt;
  &lt;li&gt;\(\omega_h\) increases with \(\alpha_m, \alpha_h, \alpha_b, \gamma\).&lt;/li&gt;
  &lt;li&gt;\(\omega_h\) decreases with \(\sigma_m, \sigma_h, \rho\).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I started with a set of reasonable parameter values:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;For consistency with my previous essay, \(\alpha_m = 8\%, \sigma_m = 18\%, \alpha_h = 0\%, \sigma_h = 18\%\).&lt;/li&gt;
  &lt;li&gt;\(\gamma = 2, \rho = -0.6\) for RRA/correlations that relatively favor mission hedging.&lt;/li&gt;
  &lt;li&gt;\(\alpha_b = 3.58, \sigma_b = 0.99\) because these are the parameter values given by &lt;a href=&quot;https://www.metaculus.com/questions/3479/&quot;&gt;Metaculus question 3479&lt;/a&gt;. These values correspond to a mean timeline of \(\exp(3.58)\) = 36 years and a median timeline of \(\exp(3.58 - \frac{1}{2} 0.99^2)\) = 22 years.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;(The correlation \(\rho\) is negative because if our hedge is positively correlated with AI progress, that means it’s negatively correlated with the AGI timeline.)&lt;/p&gt;
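&lt;p&gt;A quick check of the timeline arithmetic in the last bullet above:&lt;/p&gt;

```python
# Verifying the Metaculus-derived timeline parameters: with alpha_b = 3.58 and
# sigma_b = 0.99, the mean timeline is exp(3.58) ~ 36 years and the median is
# exp(3.58 - 0.5 * 0.99^2) ~ 22 years.
import math

alpha_b, sigma_b = 3.58, 0.99
mean_years = math.exp(alpha_b)
median_years = math.exp(alpha_b - 0.5 * sigma_b ** 2)
```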

&lt;p&gt;With these parameters, the optimal allocation is 124% MVO asset, 18% hedge asset, for a relative allocation of 13% to the hedge.&lt;/p&gt;

&lt;p&gt;\(\sigma_b = 0.99\) is very close to the value that maximizes the optimal allocation to the hedge asset, and the maximum optimal allocation is only slightly higher (\(\sigma_b = 0.91\) gives 18.3%, versus 18.2% for \(\sigma_b = 0.99\)).&lt;/p&gt;

&lt;p&gt;A more optimistic return projection of \(\alpha_m = 10\%, \sigma_m = 13\%\) gives an optimal allocation of 291% MVO, 57% hedge, for a relative allocation of 16%.&lt;/p&gt;

&lt;p&gt;Less mission-hedge-favorable parameters \(\gamma = 1.1, \rho = -0.3\) give 164% MVO, 6% hedge, for a relative allocation of 4%.&lt;/p&gt;

&lt;p&gt;But remember that my model includes many assumptions that favor mission hedging, so the true optimal allocation is probably lower.&lt;/p&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:15&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;This quote is paraphrased from Holden Karnofsky’s &lt;a href=&quot;https://forum.effectivealtruism.org/posts/zGiD94SHwQ9MwPyfW/important-actionable-research-questions-for-the-most#Questions_about_AI__takeoff_dynamics___more_&quot;&gt;Important, actionable research questions for the most important century&lt;/a&gt;. &lt;a href=&quot;#fnref:15&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:23&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;An obvious but wrong question to ask is, “what is the optimal overall allocation to mission hedging?” The right question is, “given the current aggregate portfolio of value-aligned altruists, how should I allocate my next dollar?” (Or next million dollars, or however much.) For small marginal changes, the answer will be either 100% or 0% mission hedging. For discussion on the optimal ultimate allocation, see &lt;a href=&quot;#appendix-c-optimal-equilibrium-allocation-to-mission-hedging&quot;&gt;Appendix C&lt;/a&gt;. &lt;a href=&quot;#fnref:23&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:17&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Specifically, the model computes the gradient of expected utility. We should allocate marginal dollars in whichever direction the gradient is largest. &lt;a href=&quot;#fnref:17&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:44&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Realistically, I expect that the EA community will never collectively use enough leverage, so it will always be beneficial for investors on the margin to add leverage to their portfolios. &lt;a href=&quot;#fnref:44&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:34&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Leveraged up to 30% volatility with a 2% cost of leverage, this implies a 12% return for the global market portfolio and a 19% return for a factor portfolio.&lt;/p&gt;

      &lt;p&gt;For the global market portfolio, rather than simply increasing leverage, I both increased leverage and increased the ratio of stocks to bonds. This is more efficient if we pay more than the risk-free rate for leverage. &lt;a href=&quot;#fnref:34&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:9&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;For this metaphor, it technically matters how big the urn is. If you’re drawing from a finite urn without replacement, then doubling the number of draws less than doubles the probability of drawing a particular idea. But for a sufficiently large urn, we can approximate the probability as linear.&lt;/p&gt;

      &lt;p&gt;It could be interesting to extend this urn metaphor to make it more accurate, and use that to better approximate how utility varies with AI timelines. But a more accurate urn metaphor would make mission hedging look worse, and it already doesn’t look good, so I would prefer to prioritize projects that could reverse the conclusion of this essay. &lt;a href=&quot;#fnref:9&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:12&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Technically, I did find some, but they were highly implausible—e.g., the hedge asset would need to have a 300% standard deviation. &lt;a href=&quot;#fnref:12&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:16&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I used the Grace et al. timeline forecast with the &lt;a href=&quot;#compound-until-agi-utility-function&quot;&gt;compound-until-AGI&lt;/a&gt; utility function, but I did not use it with the standard utility function because I don’t have the necessary data to calculate the standard deviation of the implied rate of progress. &lt;a href=&quot;#fnref:16&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;It would have been nice to include up-to-date data, but I don’t have more recent data, and it’s still a large enough sample that the extra 7 years&lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;43&lt;/a&gt;&lt;/sup&gt; wouldn’t make much difference. &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:3&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Revenue per share growth isn’t exactly the same as total revenue growth because revenue per share is sensitive to changes in the share count. But the numbers are pretty similar. &lt;a href=&quot;#fnref:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:10&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;We could also use negative log loss, which behaves superficially similarly to inverse loss. But inverse loss is nice because it varies geometrically, while negative log loss varies arithmetically. Our utility function combines investment returns with a measure of AI progress, and investment returns vary geometrically. Combining two geometric random variables is easier than combining a geometric with an arithmetic random variable. &lt;a href=&quot;#fnref:10&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:11&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;If the data starts in 1984, that means 1985 is the earliest year for which we can calculate the growth rate. &lt;a href=&quot;#fnref:11&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:37&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;This question’s definition of AGI is probably overly narrow. I also looked at &lt;a href=&quot;https://www.metaculus.com/questions/5121/date-of-first-agi-strong/&quot;&gt;question 5121&lt;/a&gt;, which provides an operationalization with a higher bar for the definition of “AGI”. This question has a point-in-time standard deviation of 11.3% and a time series standard deviation of 1.6%. As of 2022-04-14, it has 315 total predictions across 124 forecasters.&lt;/p&gt;

      &lt;p&gt;In log space, question 5121 has a point-in-time standard deviation of 0.97 log-years. &lt;a href=&quot;#fnref:37&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:18&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Katja Grace, John Salvatier, Allan Dafoe, Baobao Zhang &amp;amp; Owain Evans (2017). &lt;a href=&quot;https://arxiv.org/abs/1705.08807&quot;&gt;When Will AI Exceed Human Performance? Evidence from AI Experts&lt;/a&gt;&lt;/p&gt;

      &lt;p&gt;&lt;a href=&quot;https://mdickens.me/2021/10/08/do_i_read_citations/&quot;&gt;Read status&lt;/a&gt;: I read the paper top-to-bottom, but without carefully scrutinizing it. &lt;a href=&quot;#fnref:18&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:33&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I looked at only a single survey to avoid issues with aggregating multiple surveys, e.g., combining conceptually distinct questions. I chose this survey in particular because &lt;a href=&quot;https://www.cold-takes.com/are-we-trending-toward-transformative-ai-how-would-we-know/#surveying-experts&quot;&gt;Holden claims&lt;/a&gt; it’s the best of its kind. &lt;a href=&quot;#fnref:33&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:32&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;The arithmetic mean of the logarithm of a log-normally distributed sample equals the logarithm of the median, which in turn equals the logarithm of the geometric mean. &lt;a href=&quot;#fnref:32&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:19&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;If you use the first to second quartile range, you get a standard deviation of 1.20 log-years. If you use the second to third quartile range, you get 1.32 log-years.&lt;/p&gt;

      &lt;p&gt;I approximated these numbers by eyeballing Figure 2 on page 3. They might be off by one or two years. &lt;a href=&quot;#fnref:19&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:20&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Actually it’s worse than that, because a mission hedge is much riskier than the market. &lt;a href=&quot;#fnref:20&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:52&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Sebastien Betermier, Laurent E. Calvet &amp;amp; Evan Jo (2019). &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3440147&quot;&gt;A Supply and Demand Approach to Capital Markets.&lt;/a&gt; &lt;a href=&quot;#fnref:52&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:21&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;There are a few exceptions for parameter values where changing the parameter had only a minor effect on the output. &lt;a href=&quot;#fnref:21&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:13&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;According to one particular operationalization, Metaculus puts a &lt;a href=&quot;https://www.metaculus.com/questions/736/gwp-doubles-in-4-years-vs-1-year-by-2050/&quot;&gt;69% probability&lt;/a&gt; on a slow takeoff. But people betting on the negative side will (probably) only win once transformative AI has already emerged, which may disincentivize them from betting their true beliefs.&lt;/p&gt;

      &lt;p&gt;Not that it matters what I think, but I’d put 70% probability on a fast takeoff. &lt;a href=&quot;#fnref:13&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:50&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;If your MVO asset is the global stock market, then it probably earns about the same return as a basket of AI stocks. A long-only factor portfolio could earn a higher return, which may or may not be enough to generate more expected utility than mission hedging. &lt;a href=&quot;#fnref:50&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:47&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Recall that people might overweight AI stocks if they expect them to outperform the market, but this is not the same thing as mission hedging. &lt;a href=&quot;#fnref:47&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:27&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;We could construct a hedge with arbitrarily high correlation by buying something like an “AI progress swap”, but it would have negative alpha. &lt;a href=&quot;#fnref:27&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:31&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;To be clear, this is my credence that all the given parameter values &lt;em&gt;in aggregate&lt;/em&gt; relatively favor mission hedging. &lt;a href=&quot;#fnref:31&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:41&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Owen Cotton-Barratt. &lt;a href=&quot;http://www.fhi.ox.ac.uk/law-of-logarithmic-returns/&quot;&gt;The Law of Logarithmic Returns.&lt;/a&gt; &lt;a href=&quot;#fnref:41&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:42&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Linch Zhang. &lt;a href=&quot;https://forum.effectivealtruism.org/posts/KsrcBkEPi68pcveoZ/seeking-feedback-on-new-ea-aligned-economics-paper?commentId=v6mJpCTshDjmtzBaF&quot;&gt;Comment on “Seeking feedback on new EA-aligned economics paper”.&lt;/a&gt;&lt;/p&gt;

      &lt;p&gt;Quoted in full:&lt;/p&gt;

      &lt;blockquote&gt;
        &lt;p&gt;The toy model I usually run with (note this is a mental model, I neither study academic finance nor do I spend basically any amount of time on modeling my own investments) is assuming that my altruistic investment is aiming to optimize for E(log(EA wealth)). Notably this means having an approximately linear preferences for the altruistic proportions of my own money, but suggests much more (relative) conservatism on investment for Good Ventures and FTX, or other would-be decabillionaires in EA. In addition, as previously noted, it would be good if I invested in things that aren’t heavily correlated with FB stock or crypto, assuming that I don’t have strong EMH-beating beliefs.&lt;/p&gt;
      &lt;/blockquote&gt;
      &lt;p&gt;&lt;a href=&quot;#fnref:42&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:38&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;An informal literature review by Gordon Irlam, &lt;a href=&quot;https://www.aacalc.com/docs/relative_risk_aversion&quot;&gt;Estimating the Coefficient of Relative Risk Aversion for Consumption&lt;/a&gt;, found that estimates of the coefficient of relative risk aversion tended to vary from 1 to 4, with lower estimates coming from observations of behavior and higher estimates coming from direct surveys.&lt;/p&gt;

      &lt;p&gt;I trust observed behavior more than self-reported preferences, so I prefer to use a lower number for relative risk aversion. More importantly, individuals are risk averse in ways that philanthropists are not. For AI safety work specifically, we’d expect the rate of diminishing marginal utility to look similar to any other research field, and utility of spending on research tends to be &lt;a href=&quot;http://www.fhi.ox.ac.uk/law-of-logarithmic-returns/&quot;&gt;logarithmic&lt;/a&gt; (corresponding to a risk aversion coefficient of 1).&lt;/p&gt;

      &lt;p&gt;We could instead derive the coefficient of relative risk aversion from first principles, such as by treating the utility function as following the &lt;a href=&quot;#shape-of-the-utility-function&quot;&gt;“drawing ideas from an urn” model&lt;/a&gt; (which implies a coefficient between 1 and 2), or by using the approach developed by Owen Cotton-Barratt in &lt;a href=&quot;http://www.fhi.ox.ac.uk/theory-of-log-returns/&quot;&gt;Theory Behind Logarithmic Returns&lt;/a&gt;. &lt;a href=&quot;#fnref:38&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:43&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Takeoff would still have to be “slow” in the &lt;a href=&quot;https://astralcodexten.substack.com/p/yudkowsky-contra-christiano-on-ai&quot;&gt;Christiano/Yudkowsky sense&lt;/a&gt;. But it could be fast in the sense that AI starts meaningfully accelerating economic growth sooner than expected. &lt;a href=&quot;#fnref:43&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:14&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;If \(\mu\) is the geometric mean and \(\sigma\) is the standard deviation, then the arithmetic mean approximately equals \(\mu + \frac{1}{2} \sigma^2\). &lt;a href=&quot;#fnref:14&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
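      &lt;p&gt;A quick simulation illustrates the approximation. (This is a toy check, not from the post; the 5% geometric mean and 16% volatility are made-up inputs.)&lt;/p&gt;

```python
import numpy as np

# Toy check: for lognormally distributed returns, the arithmetic mean
# return is approximately the geometric mean plus half the variance.
rng = np.random.default_rng(0)
geo, sigma = 0.05, 0.16  # hypothetical geometric mean return and volatility
log_returns = rng.normal(np.log1p(geo), sigma, 1_000_000)
arith_mean = np.mean(np.expm1(log_returns))  # simulated mean simple return
approx = geo + 0.5 * sigma**2

print(round(arith_mean, 4))  # close to the approximation below
print(round(approx, 4))      # 0.0628
```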
    &lt;/li&gt;
    &lt;li id=&quot;fn:49&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;You can certainly still prioritize AI safety even if you don’t expect transformative AI to emerge for a long time. But it’s common for people in the AI safety community to have short timelines. &lt;a href=&quot;#fnref:49&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:24&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Facebook specifically has a historical volatility of around 30%. &lt;a href=&quot;#fnref:24&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:45&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I could have used &lt;a href=&quot;https://www.aqr.com/Insights/Datasets/Time-Series-Momentum-Factors-Monthly&quot;&gt;AQR’s simulated time series momentum benchmark&lt;/a&gt;, which subtracts out estimated fees and costs. But the AQR benchmark performed much better, so I’m suspicious that their methodology overestimates true returns.&lt;/p&gt;

      &lt;p&gt;One might reasonably be concerned about survivorship bias in selecting a relatively old fund. To check for this, I compared the Chesapeake fund return to the Barclay CTA Index, and they have roughly comparable performance.&lt;/p&gt;

      &lt;p&gt;(You might ask, why didn’t I just use the Barclay CTA Index? Well, dear reader, that’s because Chesapeake provides monthly returns, but the Barclay CTA Index only publicly lists annual performance.)&lt;/p&gt;

      &lt;p&gt;That being said, the returns aren’t actually that important because we’re looking at correlations, not returns. &lt;a href=&quot;#fnref:45&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:48&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;On my computer, I have a folder filled with dozens of charts like this one. You probably don’t care about most of them. Honestly you probably don’t care about this one either, but it’s sufficiently relevant that I decided to include it. &lt;a href=&quot;#fnref:48&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:36&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Technically, if yields go up, you should only expect to earn a higher total return if your time horizon is longer than the bond duration. &lt;a href=&quot;#fnref:36&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:5&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Jaime Sevilla, Lennart Heim, Anson Ho, Tamay Besiroglu, Marius Hobbhahn &amp;amp; Pablo Villalobos (2022). &lt;a href=&quot;https://arxiv.org/abs/2202.05924&quot;&gt;Compute Trends Across Three Eras of Machine Learning.&lt;/a&gt;&lt;/p&gt;

      &lt;p&gt;&lt;a href=&quot;https://mdickens.me/2021/10/08/do_i_read_citations/&quot;&gt;Read status&lt;/a&gt;: I did not read this paper at all, but I did read the associated &lt;a href=&quot;https://www.alignmentforum.org/posts/XKtybmbjhC6mXDm5z/compute-trends-across-three-eras-of-machine-learning&quot;&gt;Alignment Forum post&lt;/a&gt;. &lt;a href=&quot;#fnref:5&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:6&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu &amp;amp; Dario Amodei (2020). &lt;a href=&quot;https://arxiv.org/abs/2001.08361v1&quot;&gt;Scaling Laws for Neural Language Models.&lt;/a&gt;&lt;/p&gt;

      &lt;p&gt;&lt;a href=&quot;https://mdickens.me/2021/10/08/do_i_read_citations/&quot;&gt;Read status&lt;/a&gt;: I skimmed to find the relevant figures. &lt;a href=&quot;#fnref:6&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:8&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Technically, they used negative exponents because they were looking at how loss scales. I’m measuring performance in terms of inverse loss, so I’m using e.g. 0.05 instead of –0.05. &lt;a href=&quot;#fnref:8&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:7&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Joel Hestness, Sharan Narang, Newsha Ardalani, Gregory Diamos, Heewoo Jun, Hassan Kianinejad, Md. Mostofa Ali Patwary, Yang Yang &amp;amp; Yanqi Zhou (2017). &lt;a href=&quot;https://arxiv.org/abs/1712.00409&quot;&gt;Deep Learning Scaling is Predictable, Empirically.&lt;/a&gt;&lt;/p&gt;

      &lt;p&gt;&lt;a href=&quot;https://mdickens.me/2021/10/08/do_i_read_citations/&quot;&gt;Read status&lt;/a&gt;: I skimmed to find the relevant figures. &lt;a href=&quot;#fnref:7&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:30&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;And ML benchmark performance is only a proxy for AI progress, so model size is a proxy for a proxy. &lt;a href=&quot;#fnref:30&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:25&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;A theoretical model that assumes log-normal price distributions and continuous rebalancing overestimates optimal leverage, but only by a small margin. See &lt;a href=&quot;https://mdickens.me/2020/06/21/samuelson_share_predict_optimal_leverage/&quot;&gt;Do Theoretical Models Accurately Predict Optimal Leverage?&lt;/a&gt; &lt;a href=&quot;#fnref:25&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:26&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;We can see this by calculating the gradient of expected utility at an allocation of (200% MVO asset, 0% hedge asset). The gradient is largest in the direction of the MVO asset.&lt;/p&gt;

      &lt;p&gt;The intuitive explanation for this result is that the model wants to use more than 200% leverage, so if we cap leverage at 200%, it wants to squeeze out as much expected return as it can, and mission hedging reduces expected return. &lt;a href=&quot;#fnref:26&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:2&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I realize that 2013 was 9 years ago, but this type of data set usually has a lag time of about a year, so we’d only have full data through 2020. &lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>The Review Service Vortex of Death</title>
				<pubDate>Mon, 20 Jun 2022 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2022/06/20/review_service_vortex/</link>
				<guid isPermaLink="true">http://mdickens.me/2022/06/20/review_service_vortex/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;Consumers want to know which products are good without having to buy them first. One way to do that is by reading reviews.&lt;/p&gt;

&lt;p&gt;There are third-party services that provide product reviews. Unfortunately, almost all of them are useless because they inevitably fall into the Review Service Vortex of Death.&lt;/p&gt;

&lt;p&gt;Review services have a fundamental incentives problem stemming from two facts:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Businesses don’t want to get bad reviews and they will pay a lot of money to have fake good reviews written, or to have bad reviews removed.&lt;/li&gt;
  &lt;li&gt;Consumers can’t tell when a review service is removing bad reviews.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Therefore, any review service, even if consumers pay to use it, is incentivized to accept businesses’ money to remove bad reviews, thus making the service useless for consumers. And it can get away with this behavior for a long time.&lt;/p&gt;

&lt;p&gt;The biggest reviewers—including Better Business Bureau, Yelp, and Trustpilot—have all fallen into the Review Service Vortex of Death, and should not be trusted by consumers, but they continue to be used because it’s not common knowledge that they delete bad reviews in exchange for money. (And indeed, it’s hard to even prove that they do.)&lt;/p&gt;

&lt;p&gt;What can consumers do about this? I don’t know. Businesses like Amazon, which make their money from retail sales, are less likely to fall into the Vortex, but they’re still vulnerable to sellers giving themselves fake reviews.&lt;/p&gt;

&lt;p&gt;(I did &lt;a href=&quot;https://mdickens.me/2021/04/05/comparison_of_DAF_providers/&quot;&gt;write a review article&lt;/a&gt; one time—a review of donor-advised fund providers—and a couple of providers have subsequently emailed me to ask me to include them in my article. But sadly&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;, they didn’t offer me any bribes.)&lt;/p&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;This is a joke. If a company offered me a bribe to include them, then I’d have to exclude them as a matter of principle. &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>Index Funds That Vote Are Active, Not Passive</title>
				<pubDate>Wed, 20 Apr 2022 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2022/04/20/index_funds_that_vote_are_active/</link>
				<guid isPermaLink="true">http://mdickens.me/2022/04/20/index_funds_that_vote_are_active/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;As an investor, you might invest in an index fund because you want to get unbiased, diversified exposure to the market. You don’t have to figure out which stocks will beat the market if you simply buy every stock.&lt;/p&gt;

&lt;p&gt;But when you invest in an index fund, usually that means the fund can now vote on your behalf. The more stock an index fund owns, the more voting power it has. Generally speaking, the big index fund providers (including &lt;a href=&quot;https://corporate.vanguard.com/content/dam/corp/advocate/investment-stewardship/pdf/policies-and-reports/2021_proxy_voting_policies.pdf&quot;&gt;Vanguard&lt;/a&gt; and &lt;a href=&quot;https://www.blackrock.com/corporate/literature/fact-sheet/blk-responsible-investment-guidelines-us.pdf&quot;&gt;BlackRock&lt;/a&gt;) will vote in ways that align with their own corporate values—their top (stated) priorities are to increase climate change mitigation and workforce gender/racial diversity.&lt;/p&gt;

&lt;p&gt;Regardless of whether you &lt;em&gt;want&lt;/em&gt; this voting behavior, it means these index funds are not passive. By putting your money in an index fund that votes, you are implicitly claiming that it will make better voting decisions than the market.&lt;/p&gt;

&lt;p&gt;(For that matter, any time you hold something other than the global market portfolio, you’re making an active bet. Sadly (and surprisingly), no single index fund offers the global market portfolio. But I digress.)&lt;/p&gt;



                </description>
			</item>
		
			<item>
				<title>A Preliminary Model of Mission-Correlated Investing</title>
				<pubDate>Mon, 04 Apr 2022 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2022/04/04/model_of_mission-correlated_investing/</link>
				<guid isPermaLink="true">http://mdickens.me/2022/04/04/model_of_mission-correlated_investing/</guid>
                <description>
                  
                  
                  
                  &lt;h2 id=&quot;summary&quot;&gt;Summary&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;TLDR:&lt;/strong&gt; According to my preliminary model, the altruistic investing portfolio should ultimately allocate 5–20% on a risk-adjusted basis to mission-correlated investing. But for the current EA portfolio, it’s better on the margin to increase its risk-adjusted return than to introduce mission-correlated investments.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Last updated 2022-04-06.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The purpose of &lt;a href=&quot;https://forum.effectivealtruism.org/posts/YApaCr98Q7wSYcaoB/mission-correlation-more-than-just-hedging&quot;&gt;mission-correlated investing&lt;/a&gt; is to earn more money in worlds where your money matters more. For instance, if you’re working to prevent climate change, you could buy stock in oil companies. In worlds where oil companies are more successful and climate change gets worse, you make more money.&lt;/p&gt;

&lt;p&gt;Previous work by &lt;a href=&quot;https://pubs.aeaweb.org/doi/pdfplus/10.1257/aeri.20180347&quot;&gt;Roth Tran (2019)&lt;/a&gt;&lt;sup id=&quot;fnref:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt; proved that, under certain weak assumptions, philanthropists should invest more in so-called “evil” companies than they would from a pure profit-making standpoint. This result follows from the assumption that a philanthropist’s actions become more cost-effective when the world gets worse along some dimension.&lt;/p&gt;

&lt;p&gt;That’s an interesting result. But all it says is that altruists should invest more than zero in mission hedging. How much more? Am I supposed to allocate 1% of my wealth to mission-correlated assets? 5%? 100%?&lt;/p&gt;

&lt;p&gt;To answer this question, I extended the &lt;a href=&quot;https://en.wikipedia.org/wiki/Modern_portfolio_theory&quot;&gt;standard&lt;/a&gt; portfolio choice problem to allow for mission-correlated investing. This model makes the same assumptions as the standard problem—asset prices follow lognormal distributions, people experience constant relative risk aversion, etc.—plus the assumption that utility of money increases linearly with the quantity of the mission target, e.g., because the more CO2 there is in the atmosphere, the cheaper it is to extract.&lt;/p&gt;

&lt;p&gt;I used this model to find some preliminary results. Future work should further explore the &lt;a href=&quot;#how-accurate-is-this-model&quot;&gt;model setup&lt;/a&gt; and the relevant empirical questions, which I discuss further in the &lt;a href=&quot;#future-work&quot;&gt;future work&lt;/a&gt; section.&lt;/p&gt;

&lt;p&gt;Here are the answers the model gives, with my all-things-considered &lt;a href=&quot;https://mdickens.me/confidence_tags/&quot;&gt;confidence&lt;/a&gt; in each:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Given no constraints, philanthropists should allocate somewhere between 2% and 40% to mission hedging on a risk-adjusted basis,&lt;sup id=&quot;fnref:8&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:8&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt; depending on what assumptions we make. &lt;em&gt;Confidence: Somewhat likely.&lt;/em&gt;
&lt;a href=&quot;#unconstrained-results&quot;&gt;[More]&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;Given no constraints, and using my best-guess input parameters:
    &lt;ul&gt;
      &lt;li&gt;Under this model, a philanthropist who wants to hedge a predictable outcome, such as CO2 emissions, should allocate ~5% (risk-adjusted) to mission hedging.&lt;/li&gt;
      &lt;li&gt;Under this model, a philanthropist who wants to hedge a more volatile outcome, for example AI progress, should allocate ~20% to mission hedging on a risk-adjusted basis.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;If you can’t use leverage, then you shouldn’t mission hedge unless mission hedging looks especially compelling. &lt;em&gt;Confidence: Likely.&lt;/em&gt;
&lt;a href=&quot;#results-with-a-leverage-constraint&quot;&gt;[More]&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;If you currently invest most of your money in a legacy investment that you’d like to reduce your exposure to, then it’s more important on the margin to seek high expected return than to mission hedge. &lt;em&gt;Confidence: Likely.&lt;/em&gt;
&lt;a href=&quot;#results-with-a-legacy-investment&quot;&gt;[More]&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;The optimal allocation to mission hedging is proportional to: (&lt;em&gt;Confidence: Likely&lt;/em&gt;)
    &lt;ol&gt;
      &lt;li&gt;the correlation between the hedge and the mission target being hedged;&lt;/li&gt;
      &lt;li&gt;the standard deviation of the mission target;&lt;/li&gt;
      &lt;li&gt;your degree of risk tolerance;&lt;/li&gt;
      &lt;li&gt;the inverse of the standard deviation of the hedge.&lt;/li&gt;
    &lt;/ol&gt;
  &lt;/li&gt;
&lt;/ul&gt;
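
&lt;p&gt;If those four proportionalities combine multiplicatively (my assumption here; Appendix C derives the exact result), the optimal hedge weight behaves roughly like the following sketch, where every input value is made up for illustration:&lt;/p&gt;

```python
def hedge_weight(correlation, sigma_target, sigma_hedge, gamma):
    # Hypothetical multiplicative combination of the four proportionalities:
    # the weight grows with the hedge-target correlation and the target's
    # volatility, and shrinks with risk aversion (gamma) and the hedge's
    # volatility. Illustrative only; not the post's exact formula.
    return correlation * sigma_target / (gamma * sigma_hedge)

# made-up inputs purely for illustration
print(hedge_weight(correlation=0.5, sigma_target=0.20, sigma_hedge=0.30, gamma=1.5))
```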

&lt;p&gt;&lt;em&gt;Cross-posted to the &lt;a href=&quot;https://forum.effectivealtruism.org/posts/6wwjd8kZWY5ew9Zvy/a-preliminary-model-of-mission-correlated-investing&quot;&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;!-- more --&gt;

&lt;h2 id=&quot;contents&quot;&gt;Contents&lt;/h2&gt;

&lt;ul id=&quot;markdown-toc&quot;&gt;
  &lt;li&gt;&lt;a href=&quot;#summary&quot; id=&quot;markdown-toc-summary&quot;&gt;Summary&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#contents&quot; id=&quot;markdown-toc-contents&quot;&gt;Contents&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#setup&quot; id=&quot;markdown-toc-setup&quot;&gt;Setup&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#the-goal&quot; id=&quot;markdown-toc-the-goal&quot;&gt;The goal&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#the-utility-function&quot; id=&quot;markdown-toc-the-utility-function&quot;&gt;The utility function&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#some-more-assumptions&quot; id=&quot;markdown-toc-some-more-assumptions&quot;&gt;Some more assumptions&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#table-of-variables&quot; id=&quot;markdown-toc-table-of-variables&quot;&gt;Table of variables&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#results&quot; id=&quot;markdown-toc-results&quot;&gt;Results&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#unconstrained-results&quot; id=&quot;markdown-toc-unconstrained-results&quot;&gt;Unconstrained results&lt;/a&gt;        &lt;ul&gt;
          &lt;li&gt;&lt;a href=&quot;#general-solution&quot; id=&quot;markdown-toc-general-solution&quot;&gt;General solution&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#specific-numeric-results&quot; id=&quot;markdown-toc-specific-numeric-results&quot;&gt;Specific numeric results&lt;/a&gt;&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#results-with-a-leverage-constraint&quot; id=&quot;markdown-toc-results-with-a-leverage-constraint&quot;&gt;Results with a leverage constraint&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#results-with-a-legacy-investment&quot; id=&quot;markdown-toc-results-with-a-legacy-investment&quot;&gt;Results with a legacy investment&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#how-accurate-is-this-model&quot; id=&quot;markdown-toc-how-accurate-is-this-model&quot;&gt;How accurate is this model?&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#future-work&quot; id=&quot;markdown-toc-future-work&quot;&gt;Future work&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#acknowledgments&quot; id=&quot;markdown-toc-acknowledgments&quot;&gt;Acknowledgments&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#appendices&quot; id=&quot;markdown-toc-appendices&quot;&gt;Appendices&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#appendix-a-mission-leveraging&quot; id=&quot;markdown-toc-appendix-a-mission-leveraging&quot;&gt;Appendix A: Mission leveraging&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#appendix-b-why-use-this-approach&quot; id=&quot;markdown-toc-appendix-b-why-use-this-approach&quot;&gt;Appendix B: Why use this approach?&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#appendix-c-proof-of-analytic-solution&quot; id=&quot;markdown-toc-appendix-c-proof-of-analytic-solution&quot;&gt;Appendix C: Proof of analytic solution&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#appendix-d-tables-of-optimization-results&quot; id=&quot;markdown-toc-appendix-d-tables-of-optimization-results&quot;&gt;Appendix D: Tables of optimization results&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#appendix-e-numeric-approximation-of-the-optimal-solution&quot; id=&quot;markdown-toc-appendix-e-numeric-approximation-of-the-optimal-solution&quot;&gt;Appendix E: Numeric approximation of the optimal solution&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#notes&quot; id=&quot;markdown-toc-notes&quot;&gt;Notes&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h1 id=&quot;setup&quot;&gt;Setup&lt;/h1&gt;

&lt;h2 id=&quot;the-goal&quot;&gt;The goal&lt;/h2&gt;

&lt;p&gt;We have some utility function that says how much we value the state of the world. Our utility function cares about two things: (1) our wealth and (2) some mission target. The mission target is a measure of an outcome we care about. For example, if we want to hedge climate change, the mission target is the level of CO2 in the atmosphere. If we’re hedging factory farming, the mission target could be the number of animals on factory farms.&lt;/p&gt;

&lt;p&gt;Suppose there are two things we can invest in: an MVO (mean-variance optimal) asset and a hedge asset. (Note: usually MVO stands for &lt;a href=&quot;https://www.effisols.com/basics/MVO.htm&quot;&gt;“mean-variance optimization”&lt;/a&gt;, but I’m using it to mean “mean-variance optimal”.) Let the MVO asset be the investment that maximizes expected utility of wealth without regard to the mission target&lt;sup id=&quot;fnref:14&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:14&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt;. Let the hedge asset be an investment that’s correlated with the mission target and uncorrelated with the MVO asset—for example, a climate change hedge could consist of buying oil stocks + shorting the S&amp;amp;P 500. We choose some proportions of our wealth to invest in each of the MVO asset and the hedge asset. This total could add up to more than 100% (by using leverage) or it could be negative (by short selling).&lt;/p&gt;

&lt;p&gt;Then we want to find the asset proportions that maximize expected utility.&lt;/p&gt;

&lt;p&gt;As an example, say we can invest in two securities: XOM (Exxon Mobil) and SPY (S&amp;amp;P 500 index fund). The MVO asset might be {90% SPY, 10% XOM}, and the hedge asset might be {100% XOM, –100% SPY}. That means, by assumption, {90% SPY, 10% XOM} maximizes expected utility when holding the mission target constant.&lt;/p&gt;

&lt;p&gt;If we invest 90% in the MVO asset and 10% in the hedge asset, then our allocation becomes&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;        0.9 (90% SPY + 10% XOM) + 0.1 (100% XOM – 100% SPY)
        = 71% SPY + 19% XOM
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;That is, 71% long SPY and 19% long XOM, with the remaining 10% in cash.&lt;/p&gt;
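
&lt;p&gt;The same arithmetic in Python (just the example’s numbers, nothing more):&lt;/p&gt;

```python
import numpy as np

# securities: [SPY, XOM]
mvo_asset = np.array([0.90, 0.10])     # the MVO asset: 90% SPY, 10% XOM
hedge_asset = np.array([-1.00, 1.00])  # the hedge asset: short SPY, long XOM

# invest 90% in the MVO asset and 10% in the hedge asset
allocation = 0.9 * mvo_asset + 0.1 * hedge_asset

print(allocation.round(2))             # [0.71 0.19] -> 71% SPY, 19% XOM
print(round(1 - allocation.sum(), 2))  # 0.1 -> the remaining 10% is cash
```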

&lt;p&gt;The hedge asset is a combination of positions that (by definition) has no correlation to the MVO asset. But we might never invest directly in the hedge asset. In the example above, even though the hedge includes both long and short positions, our final allocation doesn’t require short-selling. And in practice, you probably wouldn’t think about how to allocate between {90% SPY, 10% XOM} and {100% XOM, –100% SPY}. You’d just think about how to allocate between SPY and XOM. But for the sake of analyzing the problem, it’s easier to think in terms of a mean-variance optimal asset vs. an uncorrelated hedge asset.&lt;/p&gt;

&lt;p&gt;Importantly, mission hedging climate change doesn’t just mean we allocate more than 0% to oil stocks. It means we allocate &lt;em&gt;more to oil stocks than a typical investor would&lt;/em&gt;. In this example, a normal investor allocates 10% to XOM, while a mission hedger allocates 19%.&lt;/p&gt;

&lt;p&gt;Note A: Mission hedging is a special case of &lt;a href=&quot;https://forum.effectivealtruism.org/posts/YApaCr98Q7wSYcaoB/mission-correlation-more-than-just-hedging&quot;&gt;mission-correlated investing&lt;/a&gt;. It’s the most intuitive case, so I will focus on it in this essay, but the model I present could be adapted to apply to other types of mission-correlated investing. See &lt;a href=&quot;#appendix-a-mission-leveraging&quot;&gt;Appendix A&lt;/a&gt; for how this could be done.&lt;/p&gt;

&lt;p&gt;Note B: This model of mission hedging uses an extension of mean-variance optimization, but it’s not obvious that this is a good approach. See &lt;a href=&quot;#appendix-b-why-use-this-approach&quot;&gt;Appendix B&lt;/a&gt; for more on why I used this method.&lt;/p&gt;

&lt;h2 id=&quot;the-utility-function&quot;&gt;The utility function&lt;/h2&gt;

&lt;p&gt;Following &lt;a href=&quot;https://pubs.aeaweb.org/doi/pdfplus/10.1257/aeri.20180347&quot;&gt;Roth Tran (2019)&lt;/a&gt;, we want our utility function to meet the following criteria. Plain descriptions on the left, mathematical definitions on the right, with U = utility, W = wealth, b = mission target (which Roth Tran called a “bad activity”, although in my model, it’s not necessarily bad):&lt;/p&gt;

&lt;table&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;More wealth is better&lt;/td&gt;
      &lt;td&gt;\(\displaystyle\frac{dU}{dW} &amp;gt; 0\)&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;More of the mission target is worse&lt;/td&gt;
      &lt;td&gt;\(\displaystyle\frac{dU}{db} &amp;lt; 0\)&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;More of the mission target makes wealth more valuable&lt;/td&gt;
      &lt;td&gt;\(\displaystyle\frac{d^2U}{dW db} &amp;gt; 0\)&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;This describes a broad class of utility functions—so broad that it can’t give much practical guidance. Let’s pin down some more requirements:&lt;/p&gt;

&lt;table&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Wealth has diminishing marginal utility&lt;/td&gt;
      &lt;td&gt;\(\displaystyle\frac{d^2 U}{dW^2} &amp;lt; 0\)&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Sufficient wealth can eliminate almost all of the mission target, but never quite all of it&lt;/td&gt;
      &lt;td&gt;\(\displaystyle\lim\limits_{W \rightarrow \infty} \displaystyle\frac{dU}{db} = 0\)&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;We have &lt;a href=&quot;https://en.wikipedia.org/wiki/Isoelastic_utility&quot;&gt;constant relative risk aversion&lt;/a&gt; with respect to wealth&lt;/td&gt;
      &lt;td&gt;\(-W \displaystyle\frac{d^2 U}{dW^2} / \frac{dU}{dW} = \gamma\)&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;A constant relative risk aversion (CRRA) utility function looks like&lt;/p&gt;

&lt;p&gt;\begin{align}
U(W) = \displaystyle\frac{W^{1 - \gamma} - 1}{1 - \gamma}
\end{align}&lt;/p&gt;

&lt;p&gt;where \(\gamma\) (that’s the Greek letter gamma) is a constant that’s greater than 1. For our utility function to account for the mission target, we need to ensure (a) utility decreases with the mission target and (b) wealth mitigates the mission target, up to a maximum of 100% mitigation.&lt;/p&gt;

&lt;p&gt;So we can define&lt;/p&gt;

&lt;p&gt;\begin{align}
U(W, b) = f(b) \displaystyle\frac{W^{1 - \gamma} - 1}{1 - \gamma} - \frac{f(b)}{\gamma - 1}
\end{align}&lt;/p&gt;

&lt;p&gt;given some function \(f(b)\) that defines how utility scales with the mission target.&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;The \(f(b) \frac{W^{1 - \gamma} - 1}{1 - \gamma}\) term says that the utility of wealth scales with \(f(b)\). The \(-\frac{f(b)}{\gamma - 1}\) term says that total utility decreases with \(f(b)\).&lt;/p&gt;

&lt;p&gt;What is the shape of the badness function? As an example, consider climate change. If the atmosphere contains twice as much CO2, you can remove it approximately twice as quickly.&lt;sup id=&quot;fnref:21&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:21&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;5&lt;/a&gt;&lt;/sup&gt; In general, interventions to prevent a bad activity are plausibly twice as effective when there’s twice as much of the bad thing. So we can say \(f(b) = b\).&lt;/p&gt;

&lt;p&gt;Our utility function simplifies to&lt;/p&gt;

&lt;p&gt;\begin{align}
U(W, b) = b \displaystyle\frac{W^{1 - \gamma}}{1 - \gamma}
\end{align}&lt;/p&gt;

&lt;p&gt;For example, \(\gamma = 1.5\) gives&lt;/p&gt;

&lt;p&gt;\begin{align}
U(W, b) = \displaystyle\frac{-2 b}{\sqrt{W}}
\end{align}&lt;/p&gt;

&lt;p&gt;If we wanted to, we could generalize this to \(U(W, b) = \displaystyle\frac{b^\lambda W^{1 - \gamma}}{1 - \gamma}\) for some constant \(\lambda\). That would allow utility to change non-linearly with \(b\), which more accurately describes many real-world situations. For this essay, to keep things simple, I will stick with \(\lambda = 1\).&lt;/p&gt;
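
&lt;p&gt;The three criteria from the first table can be checked numerically against this utility function. A minimal sketch using finite differences at an arbitrary evaluation point (the values of \(W\), \(b\), and \(\gamma\) below are arbitrary; only the formula comes from this post):&lt;/p&gt;

```python
def utility(wealth, target, gamma=1.5):
    # U(W, b) = b * W**(1 - gamma) / (1 - gamma), i.e. the simplified
    # utility function with f(b) = b
    return target * wealth**(1 - gamma) / (1 - gamma)

W, b, eps = 4.0, 10.0, 1e-3  # arbitrary evaluation point and step size
dU_dW = (utility(W + eps, b) - utility(W, b)) / eps
dU_db = (utility(W, b + eps) - utility(W, b)) / eps
d2U_dWdb = (utility(W + eps, b + eps) - utility(W + eps, b)
            - utility(W, b + eps) + utility(W, b)) / eps**2

print(dU_dW)     # positive: more wealth is better
print(dU_db)     # negative: more of the mission target is worse
print(d2U_dWdb)  # positive: more of the target makes wealth more valuable
```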

&lt;p&gt;These assumptions imply that we should mission hedge rather than &lt;a href=&quot;#appendix-a-mission-leveraging&quot;&gt;mission leverage&lt;/a&gt;. But there are related assumptions that suggest we should mission leverage. For example, if we change the third condition to \(\displaystyle\frac{d^2U}{dW db} &amp;lt; 0\), that means wealth becomes &lt;em&gt;less&lt;/em&gt; valuable as the mission target increases (equivalently, more valuable as the mission target shrinks), so we should mission leverage.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why this utility function?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Most of the listed criteria make perfect sense—e.g., of course more wealth is better and more of the mission target is worse. It’s not always true that when the mission target gets worse, our spending becomes more cost-effective. But it’s true in some cases, and those are the cases where we’d want to consider mission hedging.&lt;/p&gt;

&lt;p&gt;The weirdest thing about this utility function: it has an upper bound. Why is that? Couldn’t we produce arbitrarily high value for the world if we had sufficient wealth?&lt;/p&gt;

&lt;p&gt;A bounded utility function makes sense if we think of ourselves as committed to minimizing the harm of a particular mission target. For instance, there’s an upper bound to how much climate change we can prevent, namely, all of it. If we mitigate 100% of climate change, that provides an upper bound on how much good we can do.&lt;/p&gt;

&lt;p&gt;Note: This utility function implies that impact investing doesn’t work—the way you invest doesn’t directly change the world. That might be false, but I’ll treat it as true for now.&lt;/p&gt;

&lt;h2 id=&quot;some-more-assumptions&quot;&gt;Some more assumptions&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The philanthropist doesn’t care about anyone else’s investment portfolio.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In reality, &lt;a href=&quot;https://mdickens.me/2022/03/18/altruistic_investors_care_about_aggregate_altruistic_portfolio/&quot;&gt;philanthropists should value&lt;/a&gt; the investments of other like-minded people. Instead of maximizing the expected utility of their personal portfolio, they should consider the aggregate portfolio across all value-aligned investors. They should invest in whatever way best improves the aggregate portfolio on the margin.&lt;/p&gt;

&lt;p&gt;However, it makes more sense to start by modeling what philanthropists should do in the aggregate, and then later use that to figure out what to do on the margin. This post focuses on that first step.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The risk-free rate is 0%.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A more sophisticated model should include a term for the risk-free rate, but it’s simpler to assume that the rate is 0%, and this simplification doesn’t change much.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;All random variables are lognormally distributed.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It’s common to assume that asset prices follow &lt;a href=&quot;https://en.wikipedia.org/wiki/Log-normal_distribution&quot;&gt;lognormal distributions&lt;/a&gt;. I also assume that the quantity of mission target is lognormally distributed, which makes sense if it’s something that tends to grow exponentially (CO2 emissions, AI progress, and numbers of farmed animals probably all behave this way).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The MVO asset has an arithmetic mean return of 8% and a standard deviation of 18%.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I could spend all day talking about market expectations, but in short, I believe these numbers reasonably represent an aggressive but well-diversified portfolio (e.g., a global equity index fund).&lt;/p&gt;

&lt;p&gt;The optimal degree of mission hedging isn’t determined by the expected return or standard deviation alone, but by the ratio between the two.&lt;/p&gt;

&lt;p&gt;(I wrote more about market expectations in &lt;a href=&quot;https://mdickens.me/2022/04/01/how_I_estimate_future_investment_returns/&quot;&gt;a previous essay&lt;/a&gt;.)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The hedge asset has an arithmetic mean return of 0% and a standard deviation of 18%.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;By assumption, the MVO asset is the optimal selfish portfolio, and the hedge asset has zero correlation with the MVO asset. That means the optimal selfish portfolio contains no allocation to the hedge, and therefore the hedge asset &lt;em&gt;cannot&lt;/em&gt; have an expected return greater than 0%.&lt;/p&gt;

&lt;p&gt;If expected return equals 0%, then you can add leverage or mix in cash to get whatever standard deviation you want. I set the standard deviation at 18% so it’s the same as the MVO asset.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The MVO asset has zero correlation to the mission target.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This might not actually be true, but it keeps things simple. To the extent that the MVO asset is positively correlated with the mission target, that makes investing in the MVO asset look more compelling. It does not reduce the optimal allocation to the hedge, but it does reduce the optimal &lt;em&gt;relative&lt;/em&gt; allocation. For example, increasing the correlation might change the optimal allocation of (MVO, hedge) from (200%, 30%) to (210%, 30%).&lt;/p&gt;

&lt;h2 id=&quot;table-of-variables&quot;&gt;Table of variables&lt;/h2&gt;

&lt;p&gt;The model uses the following variables. &lt;em&gt;m, h, b&lt;/em&gt; follow lognormal distributions (and thus their logarithms follow normal distributions), and &lt;em&gt;x&lt;/em&gt; can refer to any one of &lt;em&gt;m, h, b&lt;/em&gt;.&lt;/p&gt;

&lt;table&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;m&lt;/td&gt;
      &lt;td&gt;MVO asset price&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;h&lt;/td&gt;
      &lt;td&gt;hedge asset price&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;b&lt;/td&gt;
      &lt;td&gt;mission target&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;W&lt;/td&gt;
      &lt;td&gt;wealth&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;\(\alpha_x\)&lt;/td&gt;
      &lt;td&gt;log(mean of x)&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;\(\mu_x\)&lt;/td&gt;
      &lt;td&gt;mean of log(x) = log(geometric mean of x)&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;\(\sigma_x\)&lt;/td&gt;
      &lt;td&gt;standard deviation of log(x)&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;\(\sigma_{xy}\)&lt;/td&gt;
      &lt;td&gt;covariance between log(x) and log(y)&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;\(\rho\)&lt;/td&gt;
      &lt;td&gt;correlation between log(h) and log(b)&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;\(\gamma\)&lt;/td&gt;
      &lt;td&gt;coefficient of risk aversion&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;\(\omega_x\)&lt;/td&gt;
      &lt;td&gt;proportion of wealth allocated to asset x&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;
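&lt;p&gt;One consequence of the table’s notation worth spelling out (this check is mine): for a lognormal variable, \(\alpha_x = \mu_x + \sigma_x^2 / 2\), i.e., the log of the arithmetic mean exceeds the mean of the logs by half the variance. A quick simulation confirms it:&lt;/p&gt;

```python
import numpy as np

rng = np.random.default_rng(42)
mu, sigma = 0.05, 0.18  # mean and standard deviation of log(x)

# Sample a lognormal x = exp(mu + sigma * Z) with Z standard normal.
x = np.exp(mu + sigma * rng.standard_normal(1_000_000))

alpha = np.log(x.mean())  # the table's alpha_x: log(mean of x)
print(alpha, mu + sigma**2 / 2)  # both approximately 0.0662
```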

&lt;h1 id=&quot;results&quot;&gt;Results&lt;/h1&gt;

&lt;p&gt;We now have a model of the expected utility of mission hedging. What does this model say a philanthropist should do?&lt;/p&gt;

&lt;p&gt;That depends on the inputs. This model has four free variables:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;\(\rho\): correlation between the hedge asset and the mission target&lt;/li&gt;
  &lt;li&gt;\(\gamma\): risk aversion&lt;/li&gt;
  &lt;li&gt;\(\alpha_b\): arithmetic mean growth rate of the mission target&lt;/li&gt;
  &lt;li&gt;\(\sigma_b\): standard deviation of the growth rate of the mission target&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The model outputs are \(\omega_m\) and \(\omega_h\): the optimal allocations to the MVO asset and the hedge asset, respectively.&lt;/p&gt;

&lt;h2 id=&quot;unconstrained-results&quot;&gt;Unconstrained results&lt;/h2&gt;

&lt;p&gt;In this section, I provide a formula for the solution, then I offer some &lt;a href=&quot;#specific-numeric-results&quot;&gt;concrete answers&lt;/a&gt; for particular numeric inputs.&lt;/p&gt;

&lt;h3 id=&quot;general-solution&quot;&gt;General solution&lt;/h3&gt;

&lt;p&gt;The MVO asset’s optimal portfolio allocation \(\omega_m\) is given by&lt;/p&gt;

&lt;p&gt;\begin{align}
\omega_m = \displaystyle\frac{\alpha_m}{\sigma_m^2 \gamma}
\end{align}&lt;/p&gt;

&lt;p&gt;where \(\alpha_m\) and \(\sigma_m\) are the arithmetic mean and standard deviation of the MVO asset, respectively.&lt;/p&gt;

&lt;p&gt;(This is identical to the solution to &lt;a href=&quot;https://www.gordoni.com/lifetime_portfolio_selection.pdf&quot;&gt;Merton’s portfolio problem&lt;/a&gt; for an ordinary investor.)&lt;/p&gt;

&lt;p&gt;And optimal hedge allocation \(\omega_h\) equals&lt;/p&gt;

&lt;p&gt;\begin{align}
\omega_h = \displaystyle\frac{\sigma_{bh}}{\sigma_h^2 \gamma}
\end{align}&lt;/p&gt;

&lt;p&gt;where \(\sigma_{bh}\) is the covariance of the mission target and the hedge, and \(\sigma_h\) is the standard deviation of the hedge.&lt;/p&gt;

&lt;p&gt;We could also write this as&lt;/p&gt;

&lt;p&gt;\begin{align}
\omega_h = \displaystyle\frac{\rho \sigma_b}{\sigma_h \gamma}
\end{align}&lt;/p&gt;

&lt;p&gt;where \(\sigma_b\) is the standard deviation of the mission target.&lt;/p&gt;

&lt;p&gt;That means the optimal allocation to the hedge asset is proportional to its correlation with the mission target, proportional to the mission target’s standard deviation, and inversely proportional to the standard deviation of the hedge and to the philanthropist’s degree of risk aversion.&lt;/p&gt;

&lt;p&gt;(See &lt;a href=&quot;#appendix-c-proof-of-analytic-solution&quot;&gt;Appendix C&lt;/a&gt; for proof.)&lt;/p&gt;
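&lt;p&gt;The closed-form solution is easy to evaluate numerically. Here is a small sketch of my own, plugging in the essay’s asset assumptions together with illustrative values for \(\rho\), \(\sigma_b\), and \(\gamma\):&lt;/p&gt;

```python
def optimal_allocations(alpha_m, sigma_m, sigma_h, rho, sigma_b, gamma):
    """Closed-form allocations from the two formulas above."""
    omega_m = alpha_m / (sigma_m**2 * gamma)     # MVO asset
    omega_h = rho * sigma_b / (sigma_h * gamma)  # hedge asset
    return omega_m, omega_h

# MVO mean 8%, both standard deviations 18%; strong-hedge scenario.
w_m, w_h = optimal_allocations(0.08, 0.18, 0.18, rho=0.9, sigma_b=0.20, gamma=1.1)
print(round(w_m, 2), round(w_h, 2))  # about 2.24 and 0.91
print(round(w_h / (w_m + w_h), 2))   # relative hedge share, about 0.29
```

&lt;p&gt;(These roughly reproduce the strong-hedge numbers reported in the next section.)&lt;/p&gt;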

&lt;p&gt;Some qualitative observations:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;\(\omega_m\) and \(\omega_h\) are independent, except that they both depend on risk aversion \(\gamma\).&lt;/li&gt;
  &lt;li&gt;\(\omega_h\) increases with \(\rho\) and \(\sigma_b\), decreases with \(\sigma_h\) and \(\gamma\), and does not change with \(\alpha_b\). This makes intuitive sense:
    &lt;ul&gt;
      &lt;li&gt;We want to mission hedge more when our hedge is more effective (higher covariance with the mission target), including when the mission target is more volatile.&lt;/li&gt;
      &lt;li&gt;We want to mission hedge less when the hedge is more volatile. We get more “bang for our buck” with each dollar we put in the hedge, so we don’t need as much.&lt;/li&gt;
      &lt;li&gt;We want to mission hedge less when we’re more risk-averse, because we prefer a safer (less volatile) portfolio.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;It’s perhaps not immediately obvious why \(\omega_h\) varies with \(\sigma_b\) but not \(\alpha_b\). The explanation is that we’re trying to hedge against future worlds where the mission target is &lt;em&gt;unexpectedly&lt;/em&gt; prevalent, which is more likely to happen when \(\sigma_b\) is large.&lt;sup id=&quot;fnref:9&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:9&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;6&lt;/a&gt;&lt;/sup&gt; \(\alpha_b\) doesn’t affect how the hedge covaries with the mission target.&lt;/li&gt;
  &lt;li&gt;The optimal relative allocation \(\displaystyle\frac{\omega_h}{\omega_m + \omega_h}\) does not depend on risk aversion \(\gamma\). In other words, risk aversion doesn’t change our relative preference for hedging vs. traditional investing.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;specific-numeric-results&quot;&gt;Specific numeric results&lt;/h3&gt;

&lt;p&gt;When provided reasonable inputs, what output does this model give?&lt;/p&gt;

&lt;p&gt;Within a reasonable range of inputs, the optimal relative allocation to the hedge asset \(\displaystyle\frac{\omega_h}{\omega_m + \omega_h}\) falls between 1.7% and 29%. So, at least given the model assumptions described previously, we know that 50% is too much and 0.1% is not enough.&lt;/p&gt;

&lt;p&gt;Specifically, I tested input values within these ranges:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;\(\rho\) (correlation) from 0.25 (weak hedge) to 0.9 (very strong hedge)&lt;/li&gt;
  &lt;li&gt;\(\gamma\) (risk aversion) from 1.1 (approximately logarithmic) to 2 (substantially risk-averse)&lt;/li&gt;
  &lt;li&gt;\(\sigma_b\) (mission target volatility) from 3% (highly stable thing, e.g., CO2 emissions&lt;sup id=&quot;fnref:6&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:6&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;7&lt;/a&gt;&lt;/sup&gt;) to 20% (volatile or hard-to-predict thing, e.g., AI progress&lt;sup id=&quot;fnref:7&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:7&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;8&lt;/a&gt;&lt;/sup&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Mission hedging looks most favorable when \(\rho = 0.9, \sigma_b = 20\%\). With those parameters, the optimal portfolio allocates 71% to the MVO asset and 29% to the hedge in relative terms. The absolute allocations depend on \(\gamma\). With \(\gamma = 1.1\), the optimal allocation is 227% MVO, 91% hedge (giving 3.18:1 leverage). \(\gamma = 2\) gives 124% MVO, 50% hedge.&lt;/p&gt;

&lt;p&gt;(I do think it’s plausible that \(\sigma_b &amp;gt; 20\%\) for some causes, so the optimal relative allocation to mission hedging could perhaps be higher than 29%. \(\sigma_b = 30\%\) gives a relative allocation of up to 38%.)&lt;/p&gt;

&lt;p&gt;Mission hedging looks least favorable when \(\rho = 0.25, \sigma_b = 3\%\). In that case, the optimal relative allocation is 98.3% MVO, 1.7% hedge.&lt;/p&gt;
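&lt;p&gt;The 1.7%–29% range follows directly from the closed-form solution; note that \(\gamma\) cancels out of the relative allocation. A quick check of my own, using the essay’s asset assumptions:&lt;/p&gt;

```python
alpha_m, sigma_m, sigma_h = 0.08, 0.18, 0.18  # the essay's asset assumptions

def relative_hedge(rho, sigma_b):
    """Relative allocation omega_h / (omega_m + omega_h); gamma cancels."""
    gamma_w_m = alpha_m / sigma_m**2     # omega_m times gamma
    gamma_w_h = rho * sigma_b / sigma_h  # omega_h times gamma
    return gamma_w_h / (gamma_w_m + gamma_w_h)

print(round(relative_hedge(0.25, 0.03), 3))  # least favorable: 0.017
print(round(relative_hedge(0.90, 0.20), 3))  # most favorable: 0.288
print(round(relative_hedge(0.90, 0.30), 3))  # sigma_b = 30%: 0.378
```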

&lt;p&gt;Mission hedging boosts expected utility by 0.01% on the low end and 0.3% on the high end.&lt;/p&gt;

&lt;p&gt;It’s still an open question whether we should invest closer to 1.7% or closer to 29% in mission hedging. That depends on (a) the correlation between the hedge and the mission target and (b) the volatility of the mission target, which I will leave as questions for future research.&lt;/p&gt;

&lt;p&gt;For bigger tables of results, see &lt;a href=&quot;#appendix-d-tables-of-optimization-results&quot;&gt;Appendix D&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id=&quot;results-with-a-leverage-constraint&quot;&gt;Results with a leverage constraint&lt;/h2&gt;

&lt;p&gt;All the optimal portfolios so far have required us to use leverage—sometimes a lot of leverage. But many investors can’t, at least not easily. What happens if we constrain the solution to disallow leverage?&lt;/p&gt;

&lt;p&gt;This changes the results in one key way: Unless we’re particularly risk-averse, we can’t take on as much risk as we want to. If we want to mission hedge, we have to give up some expected return. That doesn’t mean we should &lt;em&gt;never&lt;/em&gt; mission hedge, but it does change the tradeoffs.&lt;/p&gt;

&lt;p&gt;Specifically, it means we should allocate 0% to mission hedging unless (a) mission hedging looks particularly compelling (high volatility of the mission target, high correlation) or (b) we’re very risk-averse, to the point that we wouldn’t want to use leverage anyway (given the return/risk expectations I used, this happens when \(\gamma &amp;gt; 3\)). At \(\sigma_b = 20\%, \gamma = 1.5\), we only want to mission hedge if \(\rho \ge 0.88\), which is probably unattainable.&lt;/p&gt;
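&lt;p&gt;One way to see the constrained tradeoff is brute force: simulate the model and grid-search over long-only allocations. The sketch below is mine, not the author’s code; the conversion from arithmetic means to log-means and the log-wealth approximation are simplifying assumptions:&lt;/p&gt;

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
gamma, rho = 1.5, 0.9
alpha_m, sigma_m = 0.08, 0.18  # MVO asset (arithmetic mean, stdev)
sigma_h = 0.18                 # hedge asset (0% arithmetic mean)
sigma_b = 0.20                 # mission-target volatility

# Log-means chosen so arithmetic means match the stated assumptions.
mu_m = np.log(1 + alpha_m) - sigma_m**2 / 2
mu_h = np.log(1.0) - sigma_h**2 / 2

z_m = rng.standard_normal(n)
z_h = rng.standard_normal(n)
z_b = rho * z_h + np.sqrt(1 - rho**2) * rng.standard_normal(n)
b = np.exp(sigma_b * z_b)  # mission target; its drift doesn't move the argmax

def expected_utility(w_m, w_h):
    # Log-wealth approximation, following the derivation in Appendix C.
    log_w = w_m * (mu_m + sigma_m * z_m) + w_h * (mu_h + sigma_h * z_h)
    return np.mean(b * np.exp((1 - gamma) * log_w) / (1 - gamma))

# Grid search over allocations with no leverage (weights sum to at most 1).
candidates = [(w_m, w_h)
              for w_m in np.linspace(0, 1, 11)
              for w_h in np.linspace(0, 1 - w_m, 11)]
best = max(candidates, key=lambda w: expected_utility(*w))
print(best)
```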

&lt;h2 id=&quot;results-with-a-legacy-investment&quot;&gt;Results with a legacy investment&lt;/h2&gt;

&lt;p&gt;One common scenario: You have a lot of money in some position. It’s &lt;a href=&quot;https://mdickens.me/2020/10/18/risk_of_concentrating/&quot;&gt;not well-diversified&lt;/a&gt;, but you still hold it for historical reasons. Maybe you don’t want to sell all of your legacy asset right away, but you can sell a little bit and invest the proceeds in something else. Should you prioritize mission hedging, or is it better to invest in the mean-variance optimal asset?&lt;/p&gt;

&lt;p&gt;Under this model, you should prefer the MVO asset, even given parameters that heavily lean toward mission hedging.&lt;sup id=&quot;fnref:10&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:10&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;9&lt;/a&gt;&lt;/sup&gt; Even if we already invest half our portfolio in the MVO asset (and the other half in the legacy asset), it’s still (slightly) better on the margin to move money into MVO.&lt;sup id=&quot;fnref:11&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:11&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;10&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;This is a much stronger result than what we got from the unconstrained model—the input parameters don’t change the outcome except at the extreme high end.&lt;/p&gt;

&lt;h1 id=&quot;how-accurate-is-this-model&quot;&gt;How accurate is this model?&lt;/h1&gt;

&lt;p&gt;Getting results out of a model always requires making a lot of assumptions. And every time you make an assumption, that’s another opportunity for the model to diverge from reality.&lt;/p&gt;

&lt;p&gt;My mission hedging framework includes a lot of parameters, which are easy to modify. It also builds in some less-easily-fixable properties, including:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;This model treats investment returns as lognormal. In reality, investments are more likely to show big declines than a lognormal distribution would suggest. A more accurate model would invest more conservatively, but it’s not clear how this would affect mission hedging. Intuitively, I’d expect an improved model to relatively favor whichever asset has a smaller left skew.&lt;/li&gt;
  &lt;li&gt;This model uses a particular class of utility functions. These utility functions have certain nice features, for example, the optimal asset allocation doesn’t depend on your time horizon or on how much money you start with. A philanthropist’s real-life utility function probably doesn’t look like the one I used, but real-life utility functions are notoriously difficult to ascertain. (We could perhaps improve the utility function by further investigating how our ability to do good varies with the target we’re trying to hedge.)&lt;/li&gt;
  &lt;li&gt;This model treats leverage, short positions, and trading as free. But even investors who can use leverage and shorts must pay some extra cost for them.&lt;/li&gt;
  &lt;li&gt;This model ignores the direct effects of investing. Buying stock in a company might cause the company to become more successful. And philanthropists might want to prioritize impact investing over mission hedging or profit maximization.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The most robust output of this model is the result with a legacy investment: if you currently hold most of your money in a legacy investment, then it’s better on the margin to shift toward the mean-variance optimal portfolio than to shift toward mission hedging. This result held true regardless of how I changed the model parameters (within reason). However, this result still depends on the choice of utility function, and it’s possible to construct a utility function where it’s better on the margin to put funds toward mission hedging.&lt;/p&gt;

&lt;p&gt;For an unconstrained portfolio, it’s unclear how much to mission hedge. This model can produce a wide range of numbers depending on input parameters. The choice of risk aversion has a fairly small impact, choice of correlation has a moderate impact, and the standard deviation of the mission target matters a lot.&lt;sup id=&quot;fnref:22&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:22&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;11&lt;/a&gt;&lt;/sup&gt; This suggests that, if we want to know more about what to do, we should prioritize figuring out the mission target’s volatility.&lt;/p&gt;

&lt;h1 id=&quot;future-work&quot;&gt;Future work&lt;/h1&gt;

&lt;p&gt;Six ideas for projects that could help philanthropists decide how to mission hedge:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. What changes would make mission hedging worthwhile on the margin?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The strongest conclusion of my model is that, on the margin, mission hedging is worse than improving the risk-adjusted return of an investment portfolio. This result held across a range of input parameters. What assumptions could break this result? Are those changes reasonable?&lt;/p&gt;

&lt;p&gt;If no such changes exist, we can conclude that mission hedging isn’t worth doing in practice.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Mission hedging vs. impact investing.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Many philanthropists want to directly do good with their investments. But impact investing often prescribes the exact opposite of mission hedging. Does divestment do enough good to dampen, or even negate, the benefits of mission hedging? Should we over-weight investments in bad things, or under-weight them? How do we decide?&lt;/p&gt;

&lt;p&gt;Jonathan Harris, founder of the &lt;a href=&quot;https://total-portfolio.org/&quot;&gt;Total Portfolio Project&lt;/a&gt;, has &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3939991&quot;&gt;done some relevant work&lt;/a&gt;&lt;sup id=&quot;fnref:20&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:20&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;12&lt;/a&gt;&lt;/sup&gt; on this.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Find the specific assets that provide the strongest mission correlation.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;What can we invest in to mission hedge various causes? Which ones will work best? Or, in cases where &lt;a href=&quot;https://forum.effectivealtruism.org/posts/YApaCr98Q7wSYcaoB/mission-correlation-more-than-just-hedging&quot;&gt;mission leverage&lt;/a&gt; makes more sense than mission hedging, which assets provide the most effective mission leverage? And how can we tell?&lt;/p&gt;

&lt;p&gt;For some causes, like climate change (and maybe farm animal welfare), plenty of data exists, and it’s just a matter of putting it together. For others, there’s not much existing data, so we’d have to get more creative.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Optimization over real-world historical data.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Pick some mission target, then get historical data on the growth rate/volatility of that target and find an investment asset that hedged it well. What would the historically optimal portfolio allocation have been?&lt;/p&gt;

&lt;p&gt;Using real-world data alleviates certain concerns with using a model. The model assumes asset prices follow lognormal distributions and maintain consistent correlations over time, but those things aren’t true. Optimizing over historical data would show us how asset prices really behave (or, at least, how they used to behave).&lt;/p&gt;

&lt;p&gt;Getting this data shouldn’t be too hard to do for some causes. For climate change, we could look at historical annual CO2 emissions, market returns, and returns for some plausibly good hedges like oil company stocks or oil futures.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Shape of the utility function.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Does the marginal utility of wealth really increase linearly with the mission target? Maybe it increases with some power function of the mission target, or maybe the relationship is more complicated. What if the mission target is a &lt;em&gt;good&lt;/em&gt; thing (see &lt;a href=&quot;https://forum.effectivealtruism.org/posts/5hjcGjsmkD4RPmyRF/mission-correlated-investing-examples-of-mission-hedging-and#Mission_leveraging&quot;&gt;here&lt;/a&gt; for examples)?&lt;/p&gt;

&lt;p&gt;My current thinking on this subject is pretty primitive, so I expect there’s room to come up with something a lot better.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. Alternative approaches for evaluating mission-correlated investing.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Could we look at the value of mission-correlated investing in an entirely different way? What other methods might we use to decide how much of our portfolio (if any) to dedicate to it? Could we decide on a percentage allocation using a qualitative, rather than quantitative, approach?&lt;/p&gt;

&lt;h1 id=&quot;acknowledgments&quot;&gt;Acknowledgments&lt;/h1&gt;

&lt;p&gt;Thank you to Jonathan Harris for providing feedback on drafts of this essay.&lt;/p&gt;

&lt;h1 id=&quot;appendices&quot;&gt;Appendices&lt;/h1&gt;
&lt;h2 id=&quot;appendix-a-mission-leveraging&quot;&gt;Appendix A: Mission leveraging&lt;/h2&gt;

&lt;p&gt;Mission leveraging is the opposite of mission hedging. With mission hedging, we invest in a way that gives us more money when the world gets worse. With mission leveraging, we get more money when the world gets &lt;em&gt;better&lt;/em&gt;, doubling down on our mission instead of hedging it.&lt;/p&gt;

&lt;p&gt;I won’t discuss when to hedge vs. leverage in full generality. (See &lt;a href=&quot;https://forum.effectivealtruism.org/posts/5hjcGjsmkD4RPmyRF/mission-correlated-investing-examples-of-mission-hedging-and&quot;&gt;here&lt;/a&gt; for a discussion plus some real-world examples.) But when should we hedge vs. leverage when we’re working with the &lt;a href=&quot;#the-utility-function&quot;&gt;class of utility functions&lt;/a&gt; discussed in this essay?&lt;/p&gt;

&lt;p&gt;With any utility function that matches my stated assumptions, we always prefer to mission hedge. But there are similar utility functions under which we’d prefer to leverage.&lt;/p&gt;

&lt;p&gt;We can take a utility function with the same form as before:&lt;/p&gt;

&lt;p&gt;\begin{align}
U(W, b) = b \displaystyle\frac{W^{1 - \gamma}}{1 - \gamma}
\end{align}&lt;/p&gt;

&lt;p&gt;But instead of \(\gamma &amp;gt; 1\), let \(\gamma &amp;lt; 1\). This changes two of the six stated assumptions:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Utility is now bounded below by 0, and has no upper bound.&lt;/li&gt;
  &lt;li&gt;\(b\) is now a good thing: utility is positive instead of negative, which means a larger \(b\) increases utility rather than decreasing it.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Because \(b\) is now a good thing, by investing in an asset that’s correlated with \(b\), we are leveraging rather than hedging.&lt;/p&gt;

&lt;p&gt;If we want \(b\) to be a bad thing, we can write the utility function as \(\displaystyle\frac{1}{b} \frac{W^{1 - \gamma}}{1 - \gamma}\). The &lt;a href=&quot;#general-solution&quot;&gt;solution&lt;/a&gt; with this new utility function becomes:&lt;/p&gt;

&lt;p&gt;\begin{align}
\omega_h = -\displaystyle\frac{\sigma_{bh}}{\sigma_h^2 \gamma}
\end{align}&lt;/p&gt;

&lt;p&gt;So now, rather than buying the hedge asset, we want to short it. And the higher its covariance with \(b\), the larger our short position should be. In other words, we want to mission leverage.&lt;/p&gt;

&lt;p&gt;In broad terms, when utility of wealth grows quickly (\(\gamma &amp;lt; 1\)), we should leverage. And when utility of wealth grows slowly (\(\gamma &amp;gt; 1\)), we should hedge.&lt;/p&gt;

&lt;p&gt;The exact conditions are:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;When \(\frac{dU}{db}\) is negative, \(b\) is a bad thing. When it’s positive, \(b\) is a good thing.&lt;/li&gt;
  &lt;li&gt;When \(\frac{d^2U}{db dW}\) has the same sign as \(\frac{dU}{db}\), we want to leverage. When it has an opposite sign, we want to hedge.&lt;/li&gt;
&lt;/ol&gt;
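&lt;p&gt;These sign conditions are easy to check numerically. A quick finite-difference sketch of my own:&lt;/p&gt;

```python
def U(W, b, gamma):
    return b * W ** (1 - gamma) / (1 - gamma)

def dU_db(W, b, gamma, eps=1e-6):
    # central difference in b
    return (U(W, b + eps, gamma) - U(W, b - eps, gamma)) / (2 * eps)

def d2U_dWdb(W, b, gamma, eps=1e-4):
    # central difference of dU/db in W
    return (dU_db(W + eps, b, gamma) - dU_db(W - eps, b, gamma)) / (2 * eps)

W, b = 2.0, 1.0
# gamma above 1: dU/db negative, cross-partial positive -- opposite signs, so hedge
print(dU_db(W, b, 1.5), d2U_dWdb(W, b, 1.5))
# gamma below 1: dU/db positive, cross-partial positive -- same signs, so leverage
print(dU_db(W, b, 0.5), d2U_dWdb(W, b, 0.5))
```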

&lt;h2 id=&quot;appendix-b-why-use-this-approach&quot;&gt;Appendix B: Why use this approach?&lt;/h2&gt;

&lt;p&gt;The model I use in this post is an extension of Harry Markowitz’s &lt;a href=&quot;https://en.wikipedia.org/wiki/Modern_portfolio_theory&quot;&gt;mean-variance optimization model&lt;/a&gt; (see &lt;a href=&quot;https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1540-6261.1952.tb01525.x&quot;&gt;Markowitz (1952)&lt;/a&gt;&lt;sup id=&quot;fnref:23&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:23&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;13&lt;/a&gt;&lt;/sup&gt;). Markowitz won a Nobel Prize for this model, and it has been explored extensively in academic finance. At the same time, few investing practitioners use it.&lt;/p&gt;

&lt;p&gt;Mean-variance optimization finds the portfolio with the best risk-adjusted return (a.k.a. &lt;a href=&quot;https://www.investopedia.com/terms/s/sharperatio.asp&quot;&gt;Sharpe ratio&lt;/a&gt;) when given a set of assets with known means, standard deviations, and correlations with each other. This approach has some serious downsides:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;If you feed the model historical data for return/volatility/correlation, the model only tells you what portfolio would have been optimal historically. As William Bernstein put it: “If you can predict the optimizer inputs well enough to come close to the future [optimal portfolio], then you don’t need an optimizer in the first place.”&lt;sup id=&quot;fnref:16&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:16&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;14&lt;/a&gt;&lt;/sup&gt;&lt;/li&gt;
  &lt;li&gt;The Sharpe-ratio-maximizing portfolio is only optimal if standard deviation fully captures what people mean by “risk” (spoiler: it doesn’t).&lt;/li&gt;
  &lt;li&gt;Often, the resulting portfolio requires so much leverage that it’s impossible to implement in practice.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Most practitioners prefer other portfolio construction approaches. A simple equal-weight strategy—where you divide your money evenly across every asset class—works pretty well.&lt;sup id=&quot;fnref:17&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:17&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;15&lt;/a&gt;&lt;/sup&gt;&lt;sup id=&quot;fnref:18&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:18&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;16&lt;/a&gt;&lt;/sup&gt; A few firms (such as &lt;a href=&quot;https://rhsfinancial.com/services/investing/&quot;&gt;RHS Financial&lt;/a&gt;) do explicitly use portfolio optimization, but instead of naively inputting historical data, they take a more sophisticated approach that generates more reasonable model results.&lt;/p&gt;

&lt;p&gt;So mean-variance optimization doesn’t produce good results (at least if applied naively). Why, then, am I extending this model to mission-correlated investing?&lt;/p&gt;

&lt;p&gt;One reason is that it’s a good starting point—this model is well-understood, and so a simple extension of it is easy to reason about.&lt;/p&gt;

&lt;p&gt;A second reason is that, just as recent research has found more practical variations on the mean-variance approach, so too might we find a modification of my model that fixes the biggest issues. (It might be as simple as taking a published extension to the Markowitz model and incorporating mission-correlated investing.)&lt;/p&gt;

&lt;p&gt;But most importantly, it’s because I don’t see how a non-quantitative approach could work. With ordinary investing, you can simply invest in an equal-weighted portfolio and you won’t go too wrong. Or you can use popular rules of thumb like 60% stocks, 40% bonds. But there’s no fallback approach for allocating to mission-correlated investing. Equal-weighting would tell us to put 50% in mission hedging/leveraging, which seems excessive. And we don’t have any convenient rules of thumb, either. Without any strong guiding heuristics, I believe the best approach is to make a bunch of reasonable(ish) assumptions and use those to explicitly calculate the portfolio that maximizes expected utility.&lt;/p&gt;

&lt;h2 id=&quot;appendix-c-proof-of-analytic-solution&quot;&gt;Appendix C: Proof of analytic solution&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;Thanks to Gordon Irlam’s &lt;a href=&quot;https://www.gordoni.com/lifetime_portfolio_selection.pdf&quot;&gt;Lifetime Portfolio Selection: A Simple Derivation&lt;/a&gt;, which provides a simple proof of the asset allocation result from &lt;a href=&quot;http://lifecycleinvesting.net/Resources/merton%20lifetime%20portfolio%20selection%201969.pdf&quot;&gt;Merton (1969)&lt;/a&gt;&lt;sup id=&quot;fnref:15&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:15&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;17&lt;/a&gt;&lt;/sup&gt;, and which I found indispensable in proving the result in this section.&lt;sup id=&quot;fnref:12&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:12&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;18&lt;/a&gt;&lt;/sup&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Recall that we defined&lt;/p&gt;

&lt;p&gt;\begin{align}
U(W, b) = b \displaystyle\frac{W^{1 - \gamma}}{1 - \gamma}
\end{align}&lt;/p&gt;

&lt;p&gt;For this proof, let’s generalize this to&lt;/p&gt;

&lt;p&gt;\begin{align}
U(W, b) = b^\lambda \displaystyle\frac{W^{1 - \gamma}}{1 - \gamma}
\end{align}&lt;/p&gt;

&lt;p&gt;where \(\lambda\) determines the rate at which it becomes easier to affect the mission target as the mission target grows.&lt;/p&gt;

&lt;p&gt;Wealth is determined by the proportional allocations to the MVO asset and the hedge asset, and by the returns of each; each return follows a lognormal distribution.&lt;/p&gt;

&lt;p&gt;A single-variable lognormal distribution is given by \(e^{\mu + \sigma N(0,1)}\) where \(N(0,1)\) is a standard normal distribution (with mean 0 and standard deviation 1).&lt;/p&gt;

&lt;p&gt;\(b\) is lognormally distributed, given by \(\exp(\mu_b + \sigma_b N(0,1))\).&lt;/p&gt;

&lt;p&gt;Our objective is to maximize expected utility with respect to allocations \(\omega_m\) and \(\omega_h\):&lt;/p&gt;

&lt;p&gt;\begin{align}
\arg \max\limits_{\omega_m, \omega_h} E\left[b^\lambda \displaystyle\frac{W^{1 - \gamma}}{1 - \gamma}\right]
\end{align}&lt;/p&gt;

&lt;p&gt;Expanding \(b\) and \(W\), we get&lt;/p&gt;

&lt;p&gt;\begin{align}
\arg \max\limits_{\omega_m, \omega_h} E\left[\exp(\lambda \mu_b + \lambda \sigma_b N_b(0,1)) (\exp(\omega_m \mu_m + \omega_m \sigma_m N_m(0,1) + \omega_h \mu_h + \omega_h \sigma_h N_h(0,1)))^{1 - \gamma}\right]
\end{align}&lt;/p&gt;

&lt;p&gt;where \(N_m(0,1), N_h(0,1), N_b(0,1)\) are all standard normal distributions.&lt;/p&gt;

&lt;p&gt;(We can ignore the \(1 - \gamma\) in the denominator: scaling by a constant does not change the optimal allocation, although for \(\gamma &gt; 1\) the constant is negative, so the argmax of the scaled objective becomes an argmin.)&lt;/p&gt;

&lt;p&gt;When taking an expected value, we can separate out any independent variables—\(E[X Y] = E[X] E[Y]\). By assumption, \(m\) is independent of both \(b\) and \(h\). So we can separate this into two maximization problems:&lt;/p&gt;

&lt;p&gt;\begin{align}
\arg \max\limits_{\omega_m} E\left[\exp((\omega_m \mu_m + \omega_m \sigma_m N_m(0,1))(1 - \gamma)) \right]
\end{align}&lt;/p&gt;

&lt;p&gt;\begin{align}
\arg \max\limits_{\omega_h} E\left[\exp(\lambda \mu_b + \omega_h \mu_h (1 - \gamma) + \lambda \sigma_b N_b(0,1) + \omega_h \sigma_h (1 - \gamma) N_h(0,1)) \right]
\end{align}&lt;/p&gt;

&lt;p&gt;The first problem is simply the traditional portfolio optimization problem, which has the solution&lt;/p&gt;

&lt;p&gt;\begin{align}
\omega_m = \displaystyle\frac{\mu_m + \frac{1}{2} \sigma_m^2}{\sigma_m^2 \gamma}
\end{align}&lt;/p&gt;

&lt;p&gt;If we let \(\alpha_m = \mu_m + \frac{1}{2} \sigma_m^2\), we can write this as \(\displaystyle\frac{\alpha_m}{\sigma_m^2 \gamma}\). This is convenient in some cases because \(e^{\alpha}\) is the arithmetic mean of a lognormal distribution.&lt;/p&gt;
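&lt;p&gt;As a quick numerical check of that last claim, here is a minimal Python sketch (with arbitrary illustrative parameters, not values from this essay) that samples a lognormal distribution and compares its sample mean to \(e^{\mu + \sigma^2/2}\):&lt;/p&gt;

```python
import numpy as np

# Illustrative parameters; any values work.
mu, sigma = 0.05, 0.2
rng = np.random.default_rng(0)
samples = np.exp(mu + sigma * rng.standard_normal(1_000_000))

# The arithmetic mean of a lognormal is exp(mu + sigma^2/2) = e^alpha.
print(samples.mean(), np.exp(mu + sigma**2 / 2))
```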

&lt;p&gt;The second maximization problem does not have a pre-existing solution (to my knowledge), so let’s solve it.&lt;/p&gt;

&lt;p&gt;If \(N_b(0,1)\) and \(N_h(0,1)\) have correlation \(\rho\), then the part inside the exponent is a sum of dependent normally distributed random variables. The mean of the sum is simply the sum of the means. The standard deviation of the sum equals&lt;/p&gt;

&lt;p&gt;\begin{align}
\sigma_{b + h} = \sqrt{(\lambda \sigma_b)^2 + (\omega_h \sigma_h (1 - \gamma))^2 + 2 \rho (\lambda \sigma_b) (\omega_h \sigma_h (1 - \gamma))}
\end{align}&lt;/p&gt;

&lt;p&gt;Note that \(\rho \sigma_b \sigma_h\) is the covariance between \(b\) and \(h\), so we can replace this with \(\sigma_{bh}\).&lt;/p&gt;

&lt;p&gt;The expected value of a lognormal distribution parameterized by \(\mu\) and \(\sigma\) is \(e^{\mu + \sigma^2/2}\). Maximizing this quantity is equivalent to maximizing its logarithm, so we want to find&lt;/p&gt;

&lt;p&gt;\begin{align}
\arg \max\limits_{\omega_h} \left[ \lambda \mu_b + \omega_h \mu_h (1 - \gamma) + \frac{1}{2} \lambda^2 \sigma_b^2 + \frac{1}{2} \omega_h^2 \sigma_h^2 (1 - \gamma)^2 + \lambda \omega_h \sigma_{bh} (1 - \gamma) \right]
\end{align}&lt;/p&gt;

&lt;p&gt;At this point, it is convenient to replace \(\mu\) with \(\alpha - \sigma^2/2\) (recall that \(\alpha\) is the log of the arithmetic mean). In an efficient market, an asset uncorrelated with the market, such as a mission hedge, has \(\alpha_h = 0\), which allows us to ignore the \(\alpha_h\) term if we want to.&lt;/p&gt;

&lt;p&gt;Setting the derivative to 0,&lt;/p&gt;

&lt;p&gt;\begin{align}
\displaystyle\frac{dE(U)}{d\omega_h} = \alpha_h (1 - \gamma) - \omega_h \sigma_h^2(1 - \gamma) + \omega_h \sigma_h^2 (1 - \gamma)^2 + \lambda \sigma_{bh} (1 - \gamma) = 0
\end{align}&lt;/p&gt;

&lt;p&gt;Solving for \(\omega_h\) and simplifying gives&lt;/p&gt;

&lt;p&gt;\begin{align}
\omega_h = \displaystyle\frac{\lambda \sigma_{bh} + \alpha_h}{\sigma_h^2 \gamma}
\end{align}&lt;/p&gt;

&lt;p&gt;And naturally, for \(\lambda = 1, \alpha_h = 0\), this further simplifies to&lt;/p&gt;

&lt;p&gt;\begin{align}
\omega_h = \displaystyle\frac{\sigma_{bh}}{\sigma_h^2 \gamma}
\end{align}&lt;/p&gt;
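&lt;p&gt;The two closed-form weights are easy to compute directly. Here is a minimal Python sketch of the formulas above; the input values are hypothetical placeholders, not estimates from this essay:&lt;/p&gt;

```python
def optimal_weights(mu_m, sigma_m, sigma_h, sigma_b, rho, gamma,
                    lam=1.0, alpha_h=0.0):
    """Closed-form allocations from the derivation above:
    omega_m = alpha_m / (sigma_m^2 gamma), alpha_m = mu_m + sigma_m^2 / 2,
    omega_h = (lam * sigma_bh + alpha_h) / (sigma_h^2 gamma)."""
    alpha_m = mu_m + sigma_m**2 / 2
    sigma_bh = rho * sigma_b * sigma_h  # covariance of log b and log h
    omega_m = alpha_m / (sigma_m**2 * gamma)
    omega_h = (lam * sigma_bh + alpha_h) / (sigma_h**2 * gamma)
    return omega_m, omega_h

# Hypothetical inputs: 5% log drift / 16% vol for the MVO asset,
# 18% hedge vol, 3% mission-target vol, correlation 0.5, gamma = 1.5.
print(optimal_weights(0.05, 0.16, 0.18, 0.03, rho=0.5, gamma=1.5))
```

&lt;p&gt;With \(\lambda = 1\) and \(\alpha_h = 0\), the hedge weight reduces to \(\sigma_{bh} / (\sigma_h^2 \gamma)\), matching the simplified formula.&lt;/p&gt;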

&lt;p&gt;Jonathan Harris wrote an alternative proof &lt;a href=&quot;https://link.total-portfolio.org/MCpremia&quot;&gt;here&lt;/a&gt;, starting from the model he developed in &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3934090&quot;&gt;A Framework for Investing in Altruism&lt;/a&gt;—which is more general than my model—and plugging in some numbers. His model provides an approximation, not an exact general solution, but the solution happens to be exact in this case.&lt;/p&gt;

&lt;h2 id=&quot;appendix-d-tables-of-optimization-results&quot;&gt;Appendix D: Tables of optimization results&lt;/h2&gt;

&lt;p&gt;Results for \(\sigma_b = 3\%\)&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;\(\rho\)&lt;/th&gt;
      &lt;th&gt;\(\gamma\)&lt;/th&gt;
      &lt;th&gt;market&lt;/th&gt;
      &lt;th&gt;hedge&lt;/th&gt;
      &lt;th&gt;ratio&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;0.25&lt;/td&gt;
      &lt;td&gt;1.1&lt;/td&gt;
      &lt;td&gt;2.245&lt;/td&gt;
      &lt;td&gt;0.038&lt;/td&gt;
      &lt;td&gt;1.7%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt; &lt;/td&gt;
      &lt;td&gt;1.5&lt;/td&gt;
      &lt;td&gt;1.646&lt;/td&gt;
      &lt;td&gt;0.028&lt;/td&gt;
      &lt;td&gt;1.7%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt; &lt;/td&gt;
      &lt;td&gt;2.0&lt;/td&gt;
      &lt;td&gt;1.235&lt;/td&gt;
      &lt;td&gt;0.021&lt;/td&gt;
      &lt;td&gt;1.7%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;0.5&lt;/td&gt;
      &lt;td&gt;1.1&lt;/td&gt;
      &lt;td&gt;2.245&lt;/td&gt;
      &lt;td&gt;0.076&lt;/td&gt;
      &lt;td&gt;3.3%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt; &lt;/td&gt;
      &lt;td&gt;1.5&lt;/td&gt;
      &lt;td&gt;1.646&lt;/td&gt;
      &lt;td&gt;0.056&lt;/td&gt;
      &lt;td&gt;3.3%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt; &lt;/td&gt;
      &lt;td&gt;2.0&lt;/td&gt;
      &lt;td&gt;1.235&lt;/td&gt;
      &lt;td&gt;0.042&lt;/td&gt;
      &lt;td&gt;3.3%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;0.9&lt;/td&gt;
      &lt;td&gt;1.1&lt;/td&gt;
      &lt;td&gt;2.245&lt;/td&gt;
      &lt;td&gt;0.136&lt;/td&gt;
      &lt;td&gt;5.7%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt; &lt;/td&gt;
      &lt;td&gt;1.5&lt;/td&gt;
      &lt;td&gt;1.646&lt;/td&gt;
      &lt;td&gt;0.100&lt;/td&gt;
      &lt;td&gt;5.7%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt; &lt;/td&gt;
      &lt;td&gt;2.0&lt;/td&gt;
      &lt;td&gt;1.235&lt;/td&gt;
      &lt;td&gt;0.075&lt;/td&gt;
      &lt;td&gt;5.7%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Results for \(\sigma_b = 20\%, \gamma = 1.5\) (note: the optimal absolute market allocation for a given \(\gamma\) is the same as for \(\sigma_b = 3\%\))&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;\(\rho\)&lt;/th&gt;
      &lt;th&gt;market&lt;/th&gt;
      &lt;th&gt;hedge&lt;/th&gt;
      &lt;th&gt;ratio&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;0.25&lt;/td&gt;
      &lt;td&gt;1.646&lt;/td&gt;
      &lt;td&gt;0.185&lt;/td&gt;
      &lt;td&gt;10.1%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;0.5&lt;/td&gt;
      &lt;td&gt;1.646&lt;/td&gt;
      &lt;td&gt;0.370&lt;/td&gt;
      &lt;td&gt;18.4%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;0.9&lt;/td&gt;
      &lt;td&gt;1.646&lt;/td&gt;
      &lt;td&gt;0.667&lt;/td&gt;
      &lt;td&gt;28.8%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Results for \(\sigma_b = 30\%, \gamma = 1.5\)&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;\(\rho\)&lt;/th&gt;
      &lt;th&gt;market&lt;/th&gt;
      &lt;th&gt;hedge&lt;/th&gt;
      &lt;th&gt;ratio&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;0.25&lt;/td&gt;
      &lt;td&gt;1.646&lt;/td&gt;
      &lt;td&gt;0.278&lt;/td&gt;
      &lt;td&gt;14.4%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;0.5&lt;/td&gt;
      &lt;td&gt;1.646&lt;/td&gt;
      &lt;td&gt;0.556&lt;/td&gt;
      &lt;td&gt;25.2%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;0.9&lt;/td&gt;
      &lt;td&gt;1.646&lt;/td&gt;
      &lt;td&gt;1.000&lt;/td&gt;
      &lt;td&gt;37.8%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;
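&lt;p&gt;A sanity check on the tables above: with \(\lambda = 1\) and \(\alpha_h = 0\), the hedge column is \(\rho \sigma_b \sigma_h / (\sigma_h^2 \gamma) = \rho \sigma_b / (\sigma_h \gamma)\). The hedge volatility is not restated in this appendix, but the entries appear consistent with \(\sigma_h = 18\%\), which the sketch below assumes:&lt;/p&gt;

```python
def hedge_weight(rho, sigma_b, gamma, sigma_h=0.18):
    # omega_h = sigma_bh / (sigma_h^2 gamma), sigma_bh = rho sigma_b sigma_h,
    # which simplifies to rho * sigma_b / (sigma_h * gamma).
    return rho * sigma_b / (sigma_h * gamma)

# Hedge column of the first table (sigma_b = 3%, rho = 0.5):
for gamma in (1.1, 1.5, 2.0):
    print(round(hedge_weight(0.5, 0.03, gamma), 3))

# Hedge column of the second table (sigma_b = 20%, gamma = 1.5):
for rho in (0.25, 0.5, 0.9):
    print(round(hedge_weight(rho, 0.20, 1.5), 3))
```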

&lt;p&gt;Results for \(\sigma_b = 20\%\), no leverage allowed (\(\omega_m + \omega_h = 1\)), computed by numeric approximation:&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;\(\rho\)&lt;/th&gt;
      &lt;th&gt;\(\gamma\)&lt;/th&gt;
      &lt;th&gt;market&lt;/th&gt;
      &lt;th&gt;hedge&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;0.25&lt;/td&gt;
      &lt;td&gt;1.1&lt;/td&gt;
      &lt;td&gt;1.497&lt;/td&gt;
      &lt;td&gt;-0.497&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt; &lt;/td&gt;
      &lt;td&gt;1.5&lt;/td&gt;
      &lt;td&gt;1.230&lt;/td&gt;
      &lt;td&gt;-0.230&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt; &lt;/td&gt;
      &lt;td&gt;2&lt;/td&gt;
      &lt;td&gt;1.048&lt;/td&gt;
      &lt;td&gt;-0.048&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;0.5&lt;/td&gt;
      &lt;td&gt;1.1&lt;/td&gt;
      &lt;td&gt;1.370&lt;/td&gt;
      &lt;td&gt;-0.370&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt; &lt;/td&gt;
      &lt;td&gt;1.5&lt;/td&gt;
      &lt;td&gt;1.138&lt;/td&gt;
      &lt;td&gt;-0.138&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt; &lt;/td&gt;
      &lt;td&gt;2&lt;/td&gt;
      &lt;td&gt;0.978&lt;/td&gt;
      &lt;td&gt;0.022&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;0.9&lt;/td&gt;
      &lt;td&gt;1.1&lt;/td&gt;
      &lt;td&gt;1.177&lt;/td&gt;
      &lt;td&gt;-0.177&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt; &lt;/td&gt;
      &lt;td&gt;1.5&lt;/td&gt;
      &lt;td&gt;0.992&lt;/td&gt;
      &lt;td&gt;0.008&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt; &lt;/td&gt;
      &lt;td&gt;2&lt;/td&gt;
      &lt;td&gt;0.869&lt;/td&gt;
      &lt;td&gt;0.131&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;h2 id=&quot;appendix-e-numeric-approximation-of-the-optimal-solution&quot;&gt;Appendix E: Numeric approximation of the optimal solution&lt;/h2&gt;

&lt;p&gt;It is possible to numerically approximate the optimal allocation for any utility function. In this section, I describe how I did this.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Epistemic status: Before I started working on this problem, I didn’t understand most of the required math, and I picked it up as I went along. So there’s a reasonable chance that I’m making some mistakes.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;We want to find the utility-maximizing asset allocation, where:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Utility is a two-dimensional function of wealth and the mission target.&lt;/li&gt;
  &lt;li&gt;There are three lognormally-distributed random variables: price of the MVO asset \(m\), price of the hedge asset \(h\), and quantity of the mission target \(b\).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The three variables are parameterized by a length-3 mean vector \(\mu\) and a covariance matrix \(\Sigma\), which together describe a three-dimensional multivariate normal distribution. The random variables we care about—\(m, h, b\)—are defined as the exponentials of three normally-distributed random variables \(x_1, x_2, x_3\), which together form a vector \(x\). That is, \(m = e^{x_1}, h = e^{x_2}, b = e^{x_3}\).&lt;/p&gt;

&lt;p&gt;The probability density function for a three-variable &lt;a href=&quot;https://en.wikipedia.org/wiki/Multivariate_normal_distribution&quot;&gt;multivariate normal distribution&lt;/a&gt; is&lt;/p&gt;

&lt;p&gt;\begin{align}
f(x) = \displaystyle\frac{\exp(-\frac{1}{2}(x - \mu)^T \Sigma^{-1} (x - \mu))}{\sqrt{(2\pi)^3 \det(\Sigma)}}
\end{align}&lt;/p&gt;

&lt;p&gt;where \(\Sigma^{-1}\) is the inverse of the covariance matrix \(\Sigma\) and \(\det(\Sigma)\) is its &lt;a href=&quot;https://en.wikipedia.org/wiki/Determinant&quot;&gt;determinant&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Transforming this normal density function into lognormal space according to the &lt;a href=&quot;https://www.cs.ubc.ca/~murphyk/Teaching/Stat406-Spring08/homework/changeOfVariablesHandout.pdf&quot;&gt;multivariate change of variables formula&lt;/a&gt;, letting \(y = [m, h, b] = \exp(x)\), we have&lt;/p&gt;

&lt;p&gt;\begin{align}
f(y) = \displaystyle\frac{\exp(-\frac{1}{2}(\log(y) - \mu)^T \Sigma^{-1} (\log(y) - \mu))}{m h b \sqrt{(2 \pi)^3 \det(\Sigma)}}
\end{align}&lt;/p&gt;

&lt;p&gt;where \(\frac{1}{m h b}\) is the determinant of the Jacobian matrix of \(y\). (&lt;a href=&quot;https://math.stackexchange.com/questions/267267/intuitive-proof-of-multivariable-changing-of-variables-formula-jacobian-withou&quot;&gt;This Stack Exchange post&lt;/a&gt; gives an intuitive explanation of why the change-of-variables formula works this way.)&lt;/p&gt;
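&lt;p&gt;This density is straightforward to evaluate with SciPy: take the multivariate normal density of \(\log(y)\) and divide by the Jacobian determinant \(m h b\). A minimal sketch, with illustrative parameter values (not the estimates used in this essay):&lt;/p&gt;

```python
import numpy as np
from scipy.stats import lognorm, multivariate_normal

def mv_lognormal_pdf(y, mu, Sigma):
    """Density of y = exp(x) with x ~ N(mu, Sigma): the normal density
    of log(y) divided by the Jacobian determinant m*h*b = prod(y)."""
    return multivariate_normal(mean=mu, cov=Sigma).pdf(np.log(y)) / np.prod(y)

# Illustrative parameters: log-space means, and a covariance matrix
# correlating the hedge asset and the mission target.
mu = np.array([0.05, 0.0, 0.02])
Sigma = np.array([[0.0256, 0.0,    0.0   ],
                  [0.0,    0.0324, 0.0027],
                  [0.0,    0.0027, 0.0009]])
print(mv_lognormal_pdf(np.array([1.05, 1.0, 1.02]), mu, Sigma))
```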

&lt;p&gt;Let \(\omega_m, \omega_h\) be the asset proportions in the MVO asset and the hedge asset, respectively (recall that these can sum to greater or less than 1, and they can be negative). Wealth is then calculated as \(W = \exp(\omega_m \log m + \omega_h \log h)\). The mission target \(b\) is simply the third value of the random vector \(y\).&lt;/p&gt;

&lt;p&gt;Expected utility is given by&lt;/p&gt;

&lt;p&gt;\begin{align}
E[U(W, b)] = \int_0^\infty U(W, b) f(y) \, dy
\end{align}&lt;/p&gt;

&lt;p&gt;Our goal is to maximize this function.&lt;/p&gt;

&lt;p&gt;To solve this problem, I wrote a program to compute expected utility using numerical integration and then perform gradient descent to find the optimal asset proportions.&lt;/p&gt;

&lt;p&gt;My numerical integration uses Richardson’s extrapolation formula&lt;sup id=&quot;fnref:4&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:4&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;19&lt;/a&gt;&lt;/sup&gt; with 12 and 24 trapezoids per dimension (up to 24&lt;sup&gt;3&lt;/sup&gt; = 13,824 four-dimensional trapezoids per integral). According to my tests, this method is accurate to within 0.02% (e.g., if the true optimal allocation is 10%, this method will give an answer between 9.998% and 10.002%).&lt;/p&gt;
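&lt;p&gt;In one dimension, the Richardson step looks like this (with a toy integrand, not my actual integrand): combining the \(n\)- and \(2n\)-trapezoid estimates cancels the trapezoid rule’s leading \(O(h^2)\) error term.&lt;/p&gt;

```python
def trapezoid(f, a, b, n):
    # Composite trapezoid rule with n trapezoids on [a, b].
    h = (b - a) / n
    return h * ((f(a) + f(b)) / 2 + sum(f(a + i * h) for i in range(1, n)))

def richardson(f, a, b, n):
    # Richardson estimate F_2n + (F_2n - F_n)/3 cancels the O(h^2) error.
    f_n, f_2n = trapezoid(f, a, b, n), trapezoid(f, a, b, 2 * n)
    return f_2n + (f_2n - f_n) / 3

# Toy integrand: the integral of x^4 over [0, 1] is exactly 0.2.
f = lambda x: x**4
print(trapezoid(f, 0, 1, 12), richardson(f, 0, 1, 12))
```

&lt;p&gt;For this integrand, the plain 12-trapezoid estimate is off by about 1%, while the extrapolated estimate is accurate to a few parts per million.&lt;/p&gt;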

&lt;p&gt;I define the trapezoid bounds by first generating an evenly-spaced three-dimensional grid for three i.i.d. standard normal variables. Then I transform the bounds of these cells according to \(g(x) = e^{\mu + \sigma x}\), where \(\sigma\) is the vector of standard deviations of the three variables in \(x\). I use the transformed bounds as the base of each trapezoid. This transformation assumes the three variables are independent, which they are not, but I found that the method was accurate enough anyway.&lt;/p&gt;

&lt;p&gt;My source code is available &lt;a href=&quot;https://github.com/michaeldickens/public-scripts/blob/master/MissionHedging.hs&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:3&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Brigitte Roth Tran (2019). &lt;a href=&quot;https://pubs.aeaweb.org/doi/pdfplus/10.1257/aeri.20180347&quot;&gt;Divest, Disregard, or Double Down? Philanthropic Endowment in Objectionable Firms.&lt;/a&gt; &lt;a href=&quot;#fnref:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:8&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;An allocation of (say) 20% on a risk-adjusted basis means that 20% of the risk of my portfolio comes from mission hedging investments, which is not the same as a 20% allocation. For example, if my normal investments have a 15% standard deviation and my mission hedging investments have a 30% standard deviation, then I should allocate 11% to mission hedging, because 11% * 30% / (11% * 30% + 89% * 15%) = 20%. &lt;a href=&quot;#fnref:8&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:14&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Technically, “mean-variance optimal” means that if you apply leverage or mix in cash, this portfolio maximizes the arithmetic mean return for any given variance, or (equivalently) minimizes variance for any given mean return. In realistic conditions, you might not want to invest in the MVO portfolio (e.g., because you don’t want to use leverage), so I’m generalizing the term “MVO” to refer to the optimal portfolio from a self-interested standpoint, even if it’s not technically mean-variance optimal. &lt;a href=&quot;#fnref:14&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;If \(f(b) = 1\), this is equivalent to a standard CRRA utility function (plus a constant). &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:21&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I spoke to an expert in carbon capture, who suggested that the cost of removing CO2 from the atmosphere scales roughly with the square root of atmospheric concentration. As shown by the formula in &lt;a href=&quot;#appendix-c-proof-of-analytic-solution&quot;&gt;Appendix C&lt;/a&gt;, using a square root function instead of a linear function would halve the optimal allocation to mission hedging. But this expert also suggested that higher atmospheric CO2 would drive innovation on carbon removal, bringing the cost down, possibly resulting in a super-linear cost-effectiveness function. For simplicity, I will stick with a linear function for this essay. &lt;a href=&quot;#fnref:21&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:9&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;\(U(W, b)\) is defined as a CRRA utility function of \(W\) scaled linearly by \(b\), and CRRA utility functions are invariant with scale. Changing \(\alpha_b\) changes the scaling, but that doesn’t change the optimal allocation. &lt;a href=&quot;#fnref:9&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:6&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Using &lt;a href=&quot;https://datatopics.worldbank.org/world-development-indicators/themes/environment.html&quot;&gt;World Bank data&lt;/a&gt;, I estimated that CO2 emissions growth has an annual volatility of 3%. &lt;a href=&quot;#fnref:6&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:7&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I estimated the volatility of AI progress using the Electronic Frontier Foundation’s &lt;a href=&quot;https://www.eff.org/ai/metrics&quot;&gt;AI Progress Measurement&lt;/a&gt; database, which aggregates ML model scores across a variety of benchmarks. I found a standard deviation of growth rate of about 15%. Let’s bump this up to 20% to increase the width of the parameter range. &lt;a href=&quot;#fnref:7&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:10&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;\(\sigma_b=20\%, r=0.9, \gamma=2\) gives&lt;/p&gt;

\[\nabla E[U(\text{MVO}, \text{legacy}, \text{hedge})]_{[0, 1, 0]} = [5.43, -19.9, 3.60]\]

      &lt;p&gt;The gradient is largest in the direction of MVO, which means we maximize expected utility by moving in that direction. &lt;a href=&quot;#fnref:10&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:11&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;\(\sigma_b=20\%, r=0.9, \gamma=2\) gives&lt;/p&gt;

\[\nabla E[U(\text{MVO}, \text{legacy}, \text{hedge})]_{0.5, 0.5, 0} = [3.35, -6.67, 3.33]\]
      &lt;p&gt;&lt;a href=&quot;#fnref:11&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:22&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Technically, optimal allocation varies linearly with both correlation and standard deviation of the mission target. But I have greater uncertainty about the standard deviation, and it can cover a wider range of plausible values. &lt;a href=&quot;#fnref:22&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:20&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Jonathan Harris (2021). &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3939991&quot;&gt;Investing for Impact in General Equilibrium.&lt;/a&gt; Working paper. &lt;a href=&quot;#fnref:20&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:23&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Harry Markowitz (1952). &lt;a href=&quot;https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1540-6261.1952.tb01525.x&quot;&gt;Portfolio Selection.&lt;/a&gt; &lt;a href=&quot;#fnref:23&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:16&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;William Bernstein (1998). The Intelligent Asset Allocator. Kindle location 1082. &lt;a href=&quot;#fnref:16&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:17&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Victor DeMiguel, Lorenzo Garlappi &amp;amp; Raman Uppal (2007). &lt;a href=&quot;http://faculty.london.edu/avmiguel/DeMiguel-Garlappi-Uppal-RFS.pdf&quot;&gt;Optimal Versus Naive Diversification: How Inefficient Is the 1/N Portfolio Strategy?&lt;/a&gt; &lt;a href=&quot;#fnref:17&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:18&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Georg Ch. Pflug, Alois Pichler &amp;amp; David Wozabal (2011). &lt;a href=&quot;https://www.tu-chemnitz.de/mathematik/fima/public/publications/JBF.pdf&quot;&gt;The 1/N investment strategy is optimal under high model ambiguity.&lt;/a&gt; &lt;a href=&quot;#fnref:18&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:15&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Robert Merton (1969). &lt;a href=&quot;http://lifecycleinvesting.net/Resources/merton%20lifetime%20portfolio%20selection%201969.pdf&quot;&gt;Lifetime Portfolio Selection Under Uncertainty: The Continuous-Time Case.&lt;/a&gt; &lt;a href=&quot;#fnref:15&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:12&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I didn’t actually find the analytic solution by proving it. I originally wrote a program to numerically compute the optimal allocation. After messing with this program for a while, I discovered that the outputs appeared to change in predictable ways with the inputs, and I wrote down what I believed to be an analytic solution. Then, already having some confidence in the answer, I attempted to prove it. This provided some useful guidance—my original proof contained some mistakes, which I noticed because the result of my proof did not match the formula that I knew empirically to be correct. &lt;a href=&quot;#fnref:12&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:4&quot; role=&quot;doc-endnote&quot;&gt;
&lt;p&gt;&lt;a href=&quot;https://mathforcollege.com/nm/mws/gen/07int/mws_gen_int_txt_romberg.pdf&quot;&gt;This site&lt;/a&gt; has a good explanation of how it works. In short, if \(F_n(x)\) is the numeric integral of a function \(f(x)\) using \(n\) trapezoids per dimension, then the Richardson estimate is \(F_{2n}(x) + \frac{1}{3} (F_{2n}(x) - F_n(x))\), which cancels the leading error term of the trapezoid rule. &lt;a href=&quot;#fnref:4&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>How I Estimate Future Investment Returns</title>
				<pubDate>Fri, 01 Apr 2022 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2022/04/01/how_I_estimate_future_investment_returns/</link>
				<guid isPermaLink="true">http://mdickens.me/2022/04/01/how_I_estimate_future_investment_returns/</guid>
                <description>
                  
                  
                  
&lt;p&gt;To make informed investing decisions, I want to estimate the future expected return of my portfolio. Markets are unpredictable, and future returns will likely deviate significantly from any estimate—AQR &lt;a href=&quot;https://www.aqr.com/Insights/Research/Alternative-Thinking/2022-Capital-Market-Assumptions-for-Major-Asset-Classes&quot;&gt;believes&lt;/a&gt; there’s a 50% chance that 10-year realized equity returns will differ from their predictions by more than 3% per year. Still, it’s helpful to come up with a median guess.&lt;/p&gt;

&lt;p&gt;In this post, I explain the projections that I use for my own financial planning.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Last updated 2022-06-16.&lt;/em&gt;&lt;/p&gt;

&lt;!-- more --&gt;

&lt;h2 id=&quot;market-returns&quot;&gt;Market returns&lt;/h2&gt;

&lt;p&gt;For market returns, I look at three institutional forecasts:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;&lt;a href=&quot;https://www.aqr.com/Insights/Research/Alternative-Thinking/2022-Capital-Market-Assumptions-for-Major-Asset-Classes&quot;&gt;AQR&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://interactive.researchaffiliates.com/asset-allocation&quot;&gt;Research Affiliates (RAFI)&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://advisors.vanguard.com/insights/article/marketperspectivesdecember2021&quot;&gt;Vanguard&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;They each use somewhat different methodology:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;AQR uses the &lt;a href=&quot;https://en.wikipedia.org/wiki/Dividend_discount_model&quot;&gt;dividend discount model&lt;/a&gt;, the most standard method of estimating future returns.&lt;/li&gt;
  &lt;li&gt;RAFI also uses the dividend discount model, but additionally assumes that valuations tend to revert to the mean (like the &lt;a href=&quot;https://awealthofcommonsense.com/2016/09/the-john-bogle-expected-return-formula/&quot;&gt;Bogle expected return formula&lt;/a&gt;).&lt;/li&gt;
  &lt;li&gt;Vanguard uses a &lt;a href=&quot;https://www.vanguard.ca/documents/vanguard-global-capital-markets-model.pdf&quot;&gt;complicated model&lt;/a&gt; that incorporates lots of economic factors.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;(If I’d had to guess in advance which of these three firms uses the most complicated model, I definitely wouldn’t have guessed Vanguard.)&lt;/p&gt;

&lt;p&gt;Historical evidence suggests that valuations tend to mean revert, but not reliably. My best-guess estimate would incorporate partial but not complete mean reversion, so I believe it makes sense to take an average of AQR’s and RAFI’s return estimates. I don’t really understand how Vanguard came up with its estimates, but it tends to give numbers in between AQR’s and RAFI’s.&lt;/p&gt;

&lt;p&gt;A table of estimates for the 10-year real geometric returns of various asset classes:&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Asset Class&lt;/th&gt;
      &lt;th&gt;AQR&lt;/th&gt;
      &lt;th&gt;RAFI&lt;/th&gt;
      &lt;th&gt;Vanguard&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;US equities&lt;/td&gt;
      &lt;td&gt;3.6%&lt;/td&gt;
      &lt;td&gt;-0.7%&lt;/td&gt;
      &lt;td&gt;1.3%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;developed ex-US equities&lt;/td&gt;
      &lt;td&gt;4.3%&lt;/td&gt;
      &lt;td&gt;4.4%&lt;/td&gt;
      &lt;td&gt;4.2%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;emerging equities&lt;/td&gt;
      &lt;td&gt;5.3%&lt;/td&gt;
      &lt;td&gt;7.4%&lt;/td&gt;
      &lt;td&gt;3.2%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;US 10-year Treasuries&lt;/td&gt;
      &lt;td&gt;-0.8%&lt;/td&gt;
      &lt;td&gt;0.9%&lt;/td&gt;
      &lt;td&gt;-0.3%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;commodities&lt;/td&gt;
      &lt;td&gt;-1.5%&lt;/td&gt;
      &lt;td&gt;1.4%&lt;/td&gt;
      &lt;td&gt;N/A&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Estimated standard deviations:&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Asset Class&lt;/th&gt;
      &lt;th&gt;RAFI&lt;/th&gt;
      &lt;th&gt;Vanguard&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;US equities&lt;/td&gt;
      &lt;td&gt;15.2%&lt;/td&gt;
      &lt;td&gt;16.7%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;developed ex-US equities&lt;/td&gt;
      &lt;td&gt;17.2%&lt;/td&gt;
      &lt;td&gt;18.4%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;emerging equities&lt;/td&gt;
      &lt;td&gt;20.9%&lt;/td&gt;
      &lt;td&gt;26.8%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;US 10-year Treasuries&lt;/td&gt;
      &lt;td&gt;3.3%&lt;/td&gt;
      &lt;td&gt;4.7%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;commodities&lt;/td&gt;
      &lt;td&gt;16.3%&lt;/td&gt;
      &lt;td&gt;N/A&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Taking an approximate average of these, I assume a 2% real return for US equities, 4% for developed ex-US, and 5% for emerging markets.&lt;/p&gt;
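&lt;p&gt;For reference, the straight per-asset averages of the three forecasts in the table above can be computed directly:&lt;/p&gt;

```python
# 10-year real return forecasts from the table above (AQR, RAFI, Vanguard).
forecasts = {
    "US equities": [3.6, -0.7, 1.3],
    "developed ex-US equities": [4.3, 4.4, 4.2],
    "emerging equities": [5.3, 7.4, 3.2],
}
for asset, values in forecasts.items():
    print(f"{asset}: {sum(values) / 3:.1f}%")
```

&lt;p&gt;These come out to 1.4%, 4.3%, and 5.3%; the assumptions in the text round these with some judgment to 2%, 4%, and 5%.&lt;/p&gt;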

&lt;p&gt;I would also like to know the expected return of the global market portfolio.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Vanguard does not provide any such estimate.&lt;/li&gt;
  &lt;li&gt;RAFI forecasts the global market portfolio to have a 1.5% real return with a 9.2% standard deviation.&lt;/li&gt;
  &lt;li&gt;AQR forecasts global 60/40 to earn a 2.0% real return (no standard deviation given), and believes that this return can be increased to 3.0% at the same level of risk by adding a little leverage, increasing the weight to bonds and low-volatility equities, and mixing in commodities.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We could also look at historical performance. Meb Faber’s &lt;a href=&quot;https://www.amazon.com/Global-Asset-Allocation-Survey-Strategies-ebook/dp/B00TYY3F3C&quot;&gt;Global Asset Allocation&lt;/a&gt; found that from 1973 to 2013, the global market portfolio earned a real return of 5.4% with an 8.8% standard deviation.&lt;/p&gt;

&lt;p&gt;I approximated the global market portfolio using data from &lt;a href=&quot;https://academic.oup.com/qje/article/134/3/1225/5435538&quot;&gt;The Rate of Return on Everything, 1870–2015&lt;/a&gt;&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;. I found that from 1950 to 2015 (the time range over which every country has annual data), global 60/40&lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt; had a nominal return of 10.3% with a standard deviation of 14.5%.&lt;/p&gt;
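&lt;p&gt;As an illustration of the method described in the footnote, here is a minimal sketch (in Python) of a GDP-weighted global 60/40 return for one year. The function name and the sample inputs are my own; the actual calculation used annual equity and bond returns for every country in the dataset.&lt;/p&gt;

```python
def global_6040_return(equity_returns, bond_returns, gdps):
    """GDP-weighted global 60/40 return for a single year.

    A sketch of the approximation described in the text: weights are
    GDP shares rather than market cap, because the dataset does not
    include market cap.
    """
    total_gdp = sum(gdps)
    eq = sum(g * r for g, r in zip(gdps, equity_returns)) / total_gdp
    bond = sum(g * r for g, r in zip(gdps, bond_returns)) / total_gdp
    return 0.6 * eq + 0.4 * bond

# Two hypothetical countries with identical returns:
# 0.6 * 10% + 0.4 * 5% = 8%
print(global_6040_return([0.10, 0.10], [0.05, 0.05], [1.0, 2.0]))
```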

&lt;p&gt;Putting all these together, my best-guess forecast is a 3% real return with a 9% standard deviation.&lt;/p&gt;

&lt;h2 id=&quot;factor-premia&quot;&gt;Factor premia&lt;/h2&gt;

&lt;p&gt;I invest in certain factors that have historically predicted asset returns, including value, momentum, and trendfollowing.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://www.aqr.com/Insights/Research/Alternative-Thinking/2022-Capital-Market-Assumptions-for-Major-Asset-Classes&quot;&gt;AQR&lt;/a&gt; and &lt;a href=&quot;https://interactive.researchaffiliates.com/smart-beta&quot;&gt;RAFI&lt;/a&gt; provide projected returns for some factors. (See also &lt;a href=&quot;https://www.researchaffiliates.com/content/dam/ra/documents/smart-beta-methodologies.pdf&quot;&gt;RAFI methodology&lt;/a&gt;.)&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;AQR projects a 0.5% excess return over the benchmark for a market portfolio with a single factor tilt, and 1% excess return for a portfolio with multiple factor tilts.&lt;/li&gt;
  &lt;li&gt;AQR projects a Sharpe ratio of 0.7–0.8 for a long/short multi-factor portfolio.&lt;/li&gt;
  &lt;li&gt;RAFI projects a 4–5% excess return for a long-only value strategy and a 1.5–2% excess return for a long-only momentum strategy.&lt;/li&gt;
  &lt;li&gt;Vanguard projects a 1% excess return for a long-only value strategy, but doesn’t offer any other factor projections.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;AQR numbers are net of fees and transaction costs. RAFI estimates are net of transaction costs, but don’t account for fees.&lt;/p&gt;

&lt;p&gt;None of them provide as much detail as I’d like on how they came up with these numbers. I do think RAFI overestimates value and underestimates momentum because they believe value looks unusually undervalued right now and momentum looks unusually overvalued. But predicting factor performance based on valuation probably doesn’t work as well as they think it does. (For more on factor timing, see &lt;a href=&quot;https://www.aqr.com/Insights/Research/Journal-Article/Contrarian-Factor-Timing-is-Deceptively-Difficult&quot;&gt;Contrarian Factor Timing Is Deceptively Difficult&lt;/a&gt;&lt;sup id=&quot;fnref:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt; from AQR.)&lt;/p&gt;

&lt;p&gt;And anyway, the way I invest doesn’t match how AQR and RAFI came up with their estimates. I invest in &lt;a href=&quot;https://mdickens.me/2021/02/08/concentrated_stock_selection/&quot;&gt;concentrated&lt;/a&gt;, equal-weighted factor portfolios, which should earn higher returns than the sorts of portfolios AQR and RAFI looked at. Historically, concentrated strategies had 3–4x larger premia than weakly-tilted portfolios. On the other hand, I believe their estimates (especially RAFI’s) are too optimistic. AQR estimates future factor premia by dividing historical premia in half, which I believe is appropriately conservative, but I’m concerned that they underestimate the costs and tail risk of a leveraged long/short portfolio.&lt;/p&gt;

&lt;p&gt;I came up with my own projections for concentrated factor returns by following these steps:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Run a backtest to find the historical factor premium for a portfolio comparable to the one I invest in.&lt;/li&gt;
  &lt;li&gt;Subtract fees and expected transaction costs.&lt;/li&gt;
  &lt;li&gt;Divide the result by two, on the assumption that factors will only work half as well in the future.&lt;/li&gt;
&lt;/ol&gt;
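&lt;p&gt;The three steps above amount to a one-line calculation. A minimal sketch in Python (the function name and example inputs are mine; the inputs match the backtest numbers discussed later in this section):&lt;/p&gt;

```python
def projected_factor_premium(historical_premium, annual_fee, trading_cost):
    # Step 2: subtract fees and expected transaction costs.
    net_premium = historical_premium - annual_fee - trading_cost
    # Step 3: halve it, assuming the factor works half as well going forward.
    return net_premium / 2

# E.g. an 8% gross historical premium, a 0.5% fee, and 1.5% assumed
# transaction costs give roughly a 3% expected future premium.
print(projected_factor_premium(0.08, 0.005, 0.015))  # about 0.03
```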

&lt;p&gt;Why divide historical factor returns in half?&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;We have good reason to expect factors to &lt;a href=&quot;https://mdickens.me/2020/11/23/uncorrelated_investing/#will-these-factors-continue-to-work&quot;&gt;continue to work&lt;/a&gt;.&lt;/li&gt;
  &lt;li&gt;But they might work less well in the future, simply because investment strategies tend to get worse over time.&lt;/li&gt;
  &lt;li&gt;To keep it simple, just cut expected returns in half.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This ends up giving factor return estimates that are a little more conservative than AQR or RAFI.&lt;/p&gt;

&lt;p&gt;I get factor exposure through &lt;a href=&quot;https://alphaarchitect.com/focusedfactors/&quot;&gt;the Alpha Architect ETFs&lt;/a&gt;, which I believe are the best on the market for investors like me. They provide backtests of their methodology to 1973. I did my own backtests to 1926 using the &lt;a href=&quot;https://mba.tuck.dartmouth.edu/pages/faculty/ken.french/data_library.html&quot;&gt;Ken French Data Library&lt;/a&gt;, approximating the methodology as closely as I could, and got similar but slightly worse results (which I believe is explained by my weaker methodology; see footnote&lt;sup id=&quot;fnref:10&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:10&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt;). According to these backtests, concentrated value and momentum indexes each had an 8% premium before costs. With a 0.5% management fee, and conservatively assuming 1.5% transaction costs,&lt;sup id=&quot;fnref:4&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:4&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;5&lt;/a&gt;&lt;/sup&gt; that gives a 6% premium. Then divide this in half to get a 3% expected future premium.&lt;/p&gt;

&lt;p&gt;I use equity trendfollowing to reduce market exposure during downtrends, like what the &lt;a href=&quot;https://www.etf.com/VMOT&quot;&gt;VMOT&lt;/a&gt; ETF does. Historical evidence suggests that this does not affect expected return, but it reduces equity volatility from ~16% to ~13%. (This understates the value of trendfollowing because trendfollowing tends to change an investment’s skewness from negative to positive.) See &lt;a href=&quot;https://papers.ssrn.com/sol3/Papers.cfm?abstract_id=962461&quot;&gt;Faber (2013)&lt;/a&gt;&lt;sup id=&quot;fnref:7&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:7&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;6&lt;/a&gt;&lt;/sup&gt; for a review of trendfollowing across asset classes. I also performed my own backtests over 80+ years of equity, bond, and commodity data from various sources and got similar results.&lt;/p&gt;
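&lt;p&gt;For concreteness, Faber’s paper tests a simple 10-month moving average timing rule. This sketch implements that rule, not VMOT’s actual (more complex) trend signals:&lt;/p&gt;

```python
def trend_signal(monthly_prices):
    """Faber-style timing rule: hold the asset when its month-end price
    is at or above its 10-month simple moving average; otherwise hold cash
    (or, in VMOT's case, hedge out market exposure)."""
    sma = sum(monthly_prices[-10:]) / 10
    return monthly_prices[-1] >= sma

# Uptrend: the latest price sits above its trailing 10-month average.
print(trend_signal([100, 101, 102, 103, 104, 105, 106, 107, 108, 115]))  # True
```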

&lt;p&gt;In addition, I invest in long/short trendfollowing over commodities and fixed income, similar to what &lt;a href=&quot;https://www.etf.com/KMLM&quot;&gt;KMLM&lt;/a&gt; does. (I don’t invest in KMLM; I have an &lt;a href=&quot;https://en.wikipedia.org/wiki/Separately_managed_account&quot;&gt;SMA&lt;/a&gt; with Alpha Architect where they directly run managed futures, which is more tax- and leverage-efficient.&lt;sup id=&quot;fnref:8&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:8&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;7&lt;/a&gt;&lt;/sup&gt; But if I weren’t doing that, I’d invest in KMLM.) The best data on long/short trendfollowing performance comes from &lt;a href=&quot;https://www.trendfollowing.com/whitepaper/Century_Evidence_Trend_Following.pdf&quot;&gt;Hurst et al. (2014)&lt;/a&gt;&lt;sup id=&quot;fnref:9&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:9&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;8&lt;/a&gt;&lt;/sup&gt;, which found a 100-year historical return of 11.2% net of 2-and-20 fees with a 9.7% standard deviation. I’m reluctant to take even half this return as a future expectation because it just seems implausibly high—I expect trendfollowing to work, but not that well.&lt;sup id=&quot;fnref:13&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:13&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;9&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;The &lt;a href=&quot;https://www.barclayhedge.com/insider/barclay-cta-index&quot;&gt;Barclay CTA Index&lt;/a&gt;, an index of &lt;a href=&quot;https://www.investopedia.com/terms/c/cta.asp&quot;&gt;commodity trading advisors&lt;/a&gt; who mostly use trendfollowing strategies, had an 8.8% return with 16.5% standard deviation from 1980 to 2021.&lt;/p&gt;

&lt;p&gt;In my projections, I assume an aggressive long/short trendfollowing strategy will earn 4% real with a 15% standard deviation, which is about half the historical performance of the Barclay CTA Index.&lt;sup id=&quot;fnref:14&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:14&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;10&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;(As an aside, trendfollowing strategies perform much better than equities on risk measures that account for the severity of drawdowns, such as the &lt;a href=&quot;https://tangotools.com/ui/ui.htm&quot;&gt;Ulcer Index&lt;/a&gt;. So a trendfollowing fund with a 15% standard deviation feels much more pleasant to invest in than an equity fund with the same volatility.)&lt;/p&gt;

&lt;p&gt;For volatility projections, there’s no strong reason to expect standard deviation to change over time—it might go up or down, but neither direction looks more likely than the other. So I’ll simply assume that historical volatility continues.&lt;/p&gt;

&lt;p&gt;In summary, I expect &lt;a href=&quot;https://www.etf.com/VMOT&quot;&gt;VMOT&lt;/a&gt; to earn a 6% real return with a 13% standard deviation, for a Sharpe ratio of 0.6.&lt;sup id=&quot;fnref:12&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:12&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;11&lt;/a&gt;&lt;/sup&gt; I expect VMOT + bond/commodity trendfollowing to do somewhat better than this on a risk-adjusted basis—maybe 5% real with an 11% standard deviation.&lt;sup id=&quot;fnref:11&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:11&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;12&lt;/a&gt;&lt;/sup&gt; Remember that, even if this projection is exactly correct ex ante (which it isn’t), the true number will probably be significantly higher or lower.&lt;/p&gt;
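&lt;p&gt;A sketch of the Sharpe ratio arithmetic. The footnote notes that the risk-free rate currently sits 1 to 2 percentage points below inflation; the &amp;#8722;1.5% real risk-free rate below is my own assumption within that range, and the function name is mine:&lt;/p&gt;

```python
def sharpe_ratio(real_return, stdev, real_riskfree=-0.015):
    # Sharpe ratio = return in excess of cash, divided by standard deviation.
    # A negative real risk-free rate makes the excess return exceed the
    # real return.
    return (real_return - real_riskfree) / stdev

# The VMOT projection: 6% real with a 13% standard deviation.
print(round(sharpe_ratio(0.06, 0.13), 2))  # roughly 0.58, i.e. about 0.6
```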

&lt;h1 id=&quot;changelog&quot;&gt;Changelog&lt;/h1&gt;

&lt;p&gt;2022-05-31: Change projected trendfollowing return to incorporate Barclay CTA Index.&lt;/p&gt;

&lt;p&gt;2022-06-16: Minor wording improvement.&lt;/p&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Òscar Jordà, Katharina Knoll, Dmitry Kuvshinov, Moritz Schularick &amp;amp; Alan M Taylor (2019). &lt;a href=&quot;https://academic.oup.com/qje/article/134/3/1225/5435538&quot;&gt;The Rate of Return on Everything, 1870–2015&lt;/a&gt; &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:2&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;60/40 is supposed to be market cap weighted, but I weighted by GDP instead because the data set doesn’t include market cap. &lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:3&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Cliff Asness, Swati Chandra, Antti Ilmanen &amp;amp; Ronen Israel (2017). &lt;a href=&quot;https://www.aqr.com/Insights/Research/Journal-Article/Contrarian-Factor-Timing-is-Deceptively-Difficult&quot;&gt;Contrarian Factor Timing Is Deceptively Difficult.&lt;/a&gt; &lt;a href=&quot;#fnref:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:10&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;The Alpha Architect value and momentum ETFs mainly focus on the value and momentum factors (as the names suggest), but they also tilt toward the quality and low-volatility factors, which also show robust predictive power, although not as much as value or momentum. Those additional tilts should increase the excess risk-adjusted return. But the Ken French Data Library does not have the data I’d need to test those tilts. &lt;a href=&quot;#fnref:10&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:4&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;This is comparable to AQR’s estimated transaction costs based on their own live trading data,&lt;sup id=&quot;fnref:5&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:5&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;13&lt;/a&gt;&lt;/sup&gt; but (1) AQR (during the sample period) managed about 100x more money than Alpha Architect does and (2) AQR’s strategies rebalanced monthly, while the Alpha Architect funds only rebalance every 3–6 months.&lt;/p&gt;

      &lt;p&gt;AQR found a realized cost of 0.20% per individual trade. At a typical turnover of 50% per 6 months for value or 50% per 3 months for momentum, that implies an annual trading cost of 0.40% for value and 0.80% for momentum. A significantly smaller firm could probably achieve lower trading costs. &lt;a href=&quot;#fnref:4&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
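      &lt;p&gt;The implied arithmetic, as a sketch (the function name is mine; one unit of turnover is counted as two trades, a sell plus a buy):&lt;/p&gt;

```python
def annual_trading_cost(cost_per_trade, turnover_per_period, periods_per_year):
    # Each unit of turnover requires two trades: selling one holding
    # and buying its replacement.
    return 2 * cost_per_trade * turnover_per_period * periods_per_year

# 0.20% per trade, 50% turnover per 6 months (value): 0.40% per year
print(annual_trading_cost(0.002, 0.5, 2))
# 0.20% per trade, 50% turnover per 3 months (momentum): 0.80% per year
print(annual_trading_cost(0.002, 0.5, 4))
```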
    &lt;/li&gt;
    &lt;li id=&quot;fn:7&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Meb Faber (2013). &lt;a href=&quot;https://papers.ssrn.com/sol3/Papers.cfm?abstract_id=962461&quot;&gt;A Quantitative Approach to Tactical Asset Allocation.&lt;/a&gt; &lt;a href=&quot;#fnref:7&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:8&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;You can get something like 10:1 leverage for cheap by buying futures collateralized by your equity holdings (although I don’t have anywhere close to 10:1 leverage). To leverage an ETF, you have to use margin, which is a bit more expensive, and also wasteful because managed futures ETFs hold a lot of cash on their balance sheets, so you’re mostly just paying to get leverage on cash. &lt;a href=&quot;#fnref:8&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:9&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Brian Hurst, Yao Hua Ooi &amp;amp; Lasse H. Pedersen (2014). &lt;a href=&quot;https://www.trendfollowing.com/whitepaper/Century_Evidence_Trend_Following.pdf&quot;&gt;A Century of Evidence on Trend-Following Investing.&lt;/a&gt; &lt;a href=&quot;#fnref:9&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:13&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I compared AQR’s &lt;a href=&quot;https://www.aqr.com/Insights/Datasets/Time-Series-Momentum-Factors-Monthly&quot;&gt;trendfollowing index&lt;/a&gt; to the performance of one of AQR’s actual trendfollowing funds, &lt;a href=&quot;https://funds.aqr.com/funds/alternatives/aqr-managed-futures-strategy-fund/aqmix&quot;&gt;AQMIX&lt;/a&gt;. The index performed better by about 3 percentage points per year from 2010 to 2019. I’m not sure why—the performance difference varies a lot from year to year, which suggests that it’s not (entirely) due to transaction costs (or else we’d see consistent underperformance by a ~fixed amount). &lt;a href=&quot;#fnref:13&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:14&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I started using the 4% return / 15% volatility projection before I looked up the historical returns of the Barclay CTA Index, so it’s somewhat coincidental that that’s about half the historical return of the index. I originally came up with 4% / 15% by taking the AQR index and applying some heavy discounts. &lt;a href=&quot;#fnref:14&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:12&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;The Sharpe ratio is the return in excess of cash divided by standard deviation. Right now, the risk-free rate is lower than inflation, so the excess return is greater than the real return by 1 to 2 percentage points. &lt;a href=&quot;#fnref:12&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:11&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;A backtest of this strategy, using Ken French factor data and AQR trendfollowing data, found a real return of 10% with an 11% standard deviation net of estimated fees and trading costs. &lt;a href=&quot;#fnref:11&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:5&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Andrea Frazzini, Ronen Israel &amp;amp; Tobias J. Moskowitz (2014). &lt;a href=&quot;https://www.aqr.com/Insights/Research/Working-Paper/Trading-Costs-of-Asset-Pricing-Anomalies&quot;&gt;Trading Costs of Asset Pricing Anomalies.&lt;/a&gt; &lt;a href=&quot;#fnref:5&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>Can Good Writing Be Taught?</title>
				<pubDate>Tue, 22 Mar 2022 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2022/03/22/can_good_writing_be_taught/</link>
				<guid isPermaLink="true">http://mdickens.me/2022/03/22/can_good_writing_be_taught/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;&lt;em&gt;Epistemic status: Highly speculative; unburdened by any meaningful supporting evidence.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I’ve written something like 200 essays for school. Writing those essays did not teach me how to write. Writing for fun taught me how to write.&lt;/p&gt;

&lt;p&gt;When I was in high school, I used to complain that the essays I was required to write were both boring and unhelpful, and I’d learn more by writing essays about whatever I wanted. But if my teachers had let students write whatever they wanted, I don’t think most of them would have gotten very far. I don’t think I would have gotten very far, either. There’s a big difference between&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;me: I have an idea! I will write about it!&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;versus&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;teacher: Please have an idea and write about it.&lt;/p&gt;

  &lt;p&gt;me: What should I write about? I dunno, I guess I could write about X, I can probably force myself to come up with something to say about it.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Instead of writing something detached from ordinary life, like literary analysis, should high schoolers be taught to write something relevant, like emails?&lt;/p&gt;

&lt;p&gt;In fact, I &lt;em&gt;was&lt;/em&gt; taught how to write emails in high school (although that was only a small fraction of what we did), and the teaching was counterproductive. The way my teachers taught me to write emails was significantly wrong, and probably would have hindered my career if I had listened. (As a basic example, they said to always start an email with “Dear [name]”. Nobody starts emails that way in real life.) All the people with jobs who write emails somehow managed to un-learn the anti-lessons that they were taught.&lt;/p&gt;

&lt;p&gt;But even if my teachers had taught me how to write emails correctly, it wouldn’t have mattered. If I have to slog through a purposeless assignment that I don’t care about, anything I learn from it doesn’t stick. I only learn from doing things if I’m doing them for a reason.&lt;/p&gt;

&lt;p&gt;In conclusion, it’s impossible to force someone to learn good writing. They have to want to write.&lt;/p&gt;



                </description>
			</item>
		
			<item>
				<title>Existential Risk Reduction Is Naive (And That's a Good Thing)</title>
				<pubDate>Fri, 18 Mar 2022 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2022/03/18/x-risk_is_naive/</link>
				<guid isPermaLink="true">http://mdickens.me/2022/03/18/x-risk_is_naive/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;I see many people criticize existential risk reduction as naive or arrogant. “What, you think you can save the world?”&lt;/p&gt;

&lt;p&gt;I’m not going to dispute this. Yes, it’s naive and arrogant, and that’s a good thing.&lt;/p&gt;

&lt;p&gt;There are countless movies about saving the world. Lots of people fantasize about saving the world (or, at least, my friends and I did when we were kids, and I still do). Ask any five-year-old child, and they can tell you that saving the world is awesome. But it takes a particularly &lt;a href=&quot;https://www.lesswrong.com/posts/9kcTNWopvXFncXgPy/intellectual-hipsters-and-meta-contrarianism&quot;&gt;subtle and clever mind&lt;/a&gt; to understand that actually, trying to save the world is a silly waste of time.&lt;/p&gt;

&lt;p&gt;But actually, the five-year-old was correct all along. Saving the world is, in fact, awesome! We should do it!&lt;/p&gt;

&lt;p&gt;The mature, adult response is that you can’t save the world, and you should be content with contributing to society in your own small way. I could make some clever argument about scope sensitivity or universalist morality or something, but I don’t need to. You already know that saving the world is awesome. Everybody knows it, they’ve just forgotten.&lt;/p&gt;

&lt;p&gt;Climate change is the only mainstream cause that at least has a plausible case for saving the world. And indeed some climate change activists think in those terms. Even though I believe it’s unlikely that mitigating climate change can save the world, it’s still admirable to try. I would like to see more people try. Ask yourself: What could destroy the world, and how do we stop that from happening?&lt;/p&gt;



                </description>
			</item>
		
			<item>
				<title>Altruistic Investors Care About Other Altruists' Portfolios</title>
				<pubDate>Fri, 18 Mar 2022 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2022/03/18/altruistic_investors_care_about_aggregate_altruistic_portfolio/</link>
				<guid isPermaLink="true">http://mdickens.me/2022/03/18/altruistic_investors_care_about_aggregate_altruistic_portfolio/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;&lt;em&gt;&lt;a href=&quot;https://mdickens.me/confidence_tags/&quot;&gt;Confidence&lt;/a&gt;: Highly likely.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In some sense, altruists and traditional investors have the same investing goals—they want to own the portfolio with the best balance of return and risk. But self-interested people only care about their own portfolios. If you’re a philanthropist, you also care about other (value-aligned) philanthropists’ portfolios.&lt;/p&gt;

&lt;p&gt;When the market goes up, you have more money, and you can donate more to charity. But other altruists also have more money, and they can donate more to charity, so your money isn’t as valuable. Conversely, when markets go down, you have less money to donate at the exact time when charities need funding the most.&lt;/p&gt;

&lt;p&gt;That means you should not (necessarily) invest your money in the best overall portfolio. Instead, you should use your investments to move the pool of altruistic money in the direction of optimal.&lt;/p&gt;

&lt;p&gt;An illustration:&lt;/p&gt;

&lt;p&gt;Alice and Bob both donate to the Against Malaria Foundation (AMF). (For simplicity, let’s say they’re the only two donors.) AMF has diminishing marginal utility of money—once it distributes malaria nets in all the best places, the next round of nets won’t save quite as many lives. So Alice and Bob prefer to invest in a way that will earn good returns but without too much risk. Ideally, they’d both hold something like the total world stock market.&lt;/p&gt;

&lt;p&gt;Bob lives in the United States, and he invests all his money in US stocks. Alice could simply buy the global stock portfolio, which is roughly 50% US stocks and 50% international stocks. But that would put their aggregate portfolio at 75% US stocks, 25% international stocks (assuming Alice and Bob have the same amount of money). So AMF is being funded by an investment portfolio that’s overweighted toward US stocks, which adds risk without any reward to compensate.&lt;/p&gt;

&lt;p&gt;Alice can fix this by investing her entire portfolio in non-US stocks. Now the aggregate portfolio of AMF donors is 50% US stocks, 50% international stocks, just as it should be. It wouldn’t make sense for someone to hold 0% US stocks in their personal retirement portfolio, but this strategy works for Alice because she’s a philanthropist.&lt;/p&gt;
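&lt;p&gt;The arithmetic behind Alice’s choice, as a sketch (the function name is mine; the equal-wealth assumption comes from the example above):&lt;/p&gt;

```python
def aggregate_us_share(us_shares, wealths):
    """Wealth-weighted US-stock share of the combined donor portfolio."""
    total = sum(wealths)
    return sum(w * s for w, s in zip(wealths, us_shares)) / total

bob = 1.0  # Bob holds 100% US stocks
# If Alice buys the ~50/50 global portfolio, the donor pool is overweight US:
print(aggregate_us_share([bob, 0.5], [1, 1]))  # 0.75
# If Alice instead holds 0% US stocks, the pool lands at the global weight:
print(aggregate_us_share([bob, 0.0], [1, 1]))  # 0.5
```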

&lt;p&gt;(Of course, Alice could also talk to Bob and persuade him to diversify his investments, which might be an even better idea!)&lt;/p&gt;

&lt;p&gt;Now, I’m not trying to say that the global stock market is the best investment, or that Alice did the exact right thing in this scenario. This is just an illustration of a broader point: for an individual altruistic donor, the best investment portfolio &lt;em&gt;on the margin&lt;/em&gt; might not be the same thing as the best &lt;em&gt;overall&lt;/em&gt; portfolio. And altruists should pick the portfolio that’s best on the margin.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;(I wrote this post to provide an easy reference for this concept. The concept is not original to me—I originally heard it from Paul Christiano’s &lt;a href=&quot;https://rationalaltruist.com/2013/02/28/risk-aversion-and-investment-for-altruists/&quot;&gt;Risk aversion and investment (for altruists)&lt;/a&gt;.)&lt;/em&gt;&lt;/p&gt;



                </description>
			</item>
		
			<item>
				<title>Should Earners-to-Give Work at Startups Instead of Big Companies?</title>
				<pubDate>Fri, 12 Nov 2021 00:00:00 -0800</pubDate>
				<link>http://mdickens.me/2021/11/12/ea_work_at_startups/</link>
				<guid isPermaLink="true">http://mdickens.me/2021/11/12/ea_work_at_startups/</guid>
                <description>
                  
                  
                  
                  &lt;h2 id=&quot;summary&quot;&gt;Summary&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;&lt;a href=&quot;/confidence_tags&quot;&gt;Confidence&lt;/a&gt;: Likely.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Cross-posted to the &lt;a href=&quot;https://forum.effectivealtruism.org/posts/kGbHxYhfqttQZx2QD/should-earners-to-give-work-at-startups-instead-of-big&quot;&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Effective altruist earners-to-give might be able to donate more money if, instead of working at big companies for high salaries, they work at startups and get paid in equity. Startups are riskier than big companies, but EAs care less about risk than most people.&lt;/p&gt;

&lt;p&gt;Working at a startup is easier than starting one. It doesn’t pay as well, but based on my research, it looks like EA startup employees can earn more than big company employees in expectation.&lt;/p&gt;

&lt;p&gt;Does the optimal EA investment portfolio include a significant allocation to startups? To answer that question, I estimated the expected return and risk of startups by adding up the &lt;a href=&quot;#what-factors-determine-the-expected-return-of-startups&quot;&gt;following considerations&lt;/a&gt;:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Find a baseline of startup performance by looking at historical data on VC firm returns.&lt;/li&gt;
  &lt;li&gt;VC performance is somewhat persistent. EAs can beat the average by working at startups that the top VC firms invest in.&lt;/li&gt;
  &lt;li&gt;Startup employees get worse equity terms than VCs, but they also don’t have to pay management fees, and they get &lt;a href=&quot;https://www.benkuhn.net/optopt/&quot;&gt;meta-options&lt;/a&gt;. Overall, employees come out looking better than VCs.&lt;/li&gt;
  &lt;li&gt;Current market conditions suggest that future performance will be worse than past performance.&lt;/li&gt;
  &lt;li&gt;Startups are much riskier than publicly-traded stocks, and the startup market is moderately correlated with stocks (r=0.7).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;All things considered, my best guess is that more earners-to-give should consider working at startups.&lt;/p&gt;

&lt;!-- more --&gt;

&lt;h2 id=&quot;contents&quot;&gt;Contents&lt;/h2&gt;

&lt;ul id=&quot;markdown-toc&quot;&gt;
  &lt;li&gt;&lt;a href=&quot;#summary&quot; id=&quot;markdown-toc-summary&quot;&gt;Summary&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#contents&quot; id=&quot;markdown-toc-contents&quot;&gt;Contents&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#framing-the-problem&quot; id=&quot;markdown-toc-framing-the-problem&quot;&gt;Framing the problem&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#the-right-question&quot; id=&quot;markdown-toc-the-right-question&quot;&gt;The right question&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#modeling-the-solution&quot; id=&quot;markdown-toc-modeling-the-solution&quot;&gt;Modeling the solution&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#what-factors-determine-the-expected-return-of-startups&quot; id=&quot;markdown-toc-what-factors-determine-the-expected-return-of-startups&quot;&gt;What factors determine the expected return of startups?&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#returns-for-vc-firms&quot; id=&quot;markdown-toc-returns-for-vc-firms&quot;&gt;Returns for VC firms&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#returns-for-employees&quot; id=&quot;markdown-toc-returns-for-employees&quot;&gt;Returns for employees&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#forecasting-future-returns&quot; id=&quot;markdown-toc-forecasting-future-returns&quot;&gt;Forecasting future returns&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#giving-now-vs-later&quot; id=&quot;markdown-toc-giving-now-vs-later&quot;&gt;Giving now vs. later&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#putting-together-the-expected-return&quot; id=&quot;markdown-toc-putting-together-the-expected-return&quot;&gt;Putting together the expected return&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#risk-and-correlation-of-startups&quot; id=&quot;markdown-toc-risk-and-correlation-of-startups&quot;&gt;Risk and correlation of startups&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#leverage&quot; id=&quot;markdown-toc-leverage&quot;&gt;Leverage&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#startups-vs-public-equities&quot; id=&quot;markdown-toc-startups-vs-public-equities&quot;&gt;Startups vs. public equities&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#startups-vs-an-optimized-public-investment-portfolio&quot; id=&quot;markdown-toc-startups-vs-an-optimized-public-investment-portfolio&quot;&gt;Startups vs. an optimized public investment portfolio&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#alternative-predictionless-approach&quot; id=&quot;markdown-toc-alternative-predictionless-approach&quot;&gt;Alternative: Predictionless approach&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#alternative-models-are-bad-what-if-we-dont-use-a-model&quot; id=&quot;markdown-toc-alternative-models-are-bad-what-if-we-dont-use-a-model&quot;&gt;Alternative: Models are bad. What if we don’t use a model?&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#practical-details&quot; id=&quot;markdown-toc-practical-details&quot;&gt;Practical details&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#conclusion&quot; id=&quot;markdown-toc-conclusion&quot;&gt;Conclusion&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#areas-for-further-research&quot; id=&quot;markdown-toc-areas-for-further-research&quot;&gt;Areas for further research&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#acknowledgements&quot; id=&quot;markdown-toc-acknowledgements&quot;&gt;Acknowledgements&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#changelog&quot; id=&quot;markdown-toc-changelog&quot;&gt;Changelog&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#appendix-a-startups-for-founders-and-investors&quot; id=&quot;markdown-toc-appendix-a-startups-for-founders-and-investors&quot;&gt;Appendix A: Startups for founders and investors&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#appendix-b-some-important-tangents&quot; id=&quot;markdown-toc-appendix-b-some-important-tangents&quot;&gt;Appendix B: Some important tangents&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#notes&quot; id=&quot;markdown-toc-notes&quot;&gt;Notes&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h1 id=&quot;framing-the-problem&quot;&gt;Framing the problem&lt;/h1&gt;

&lt;p&gt;Suppose you’re an effective altruist and you want to donate as much money as possible. Perhaps you’ve heard the arguments that &lt;a href=&quot;https://80000hours.org/2012/01/salary-or-startup-how-do-gooders-can-gain-more-from-risky-careers/&quot;&gt;EAs should start startups&lt;/a&gt;. Starting a startup is a lot of work and requires special skills, so you’d rather not. Maybe you’d like to invest in venture capital, but the really good VC firms won’t accept your money. However, you wouldn’t mind &lt;em&gt;working&lt;/em&gt; at a startup. You could also work at a big company that pays a high salary. Which should you choose?&lt;/p&gt;

&lt;p&gt;This is how I would think about the problem:&lt;/p&gt;

&lt;p&gt;Say a startup offers you an equity package that’s worth &lt;code&gt;$X&lt;/code&gt; per year at the current valuation. At the same time, a big company offers you a salary that’s &lt;code&gt;$X&lt;/code&gt; higher than your salary would be at the startup. Both compensation packages have the same face value.&lt;/p&gt;

&lt;p&gt;If you work at the startup, you get &lt;code&gt;$X&lt;/code&gt; per year of equity. Some number of years later, the startup might go public or get acquired, at which point you can sell your equity for &lt;code&gt;$Y&lt;/code&gt;. Over that time, your equity earned a return of &lt;code&gt;Y/X&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;If you work at the big company, you could invest your extra &lt;code&gt;$X&lt;/code&gt; salary in the stock market. Will that investment earn a higher or lower return than the startup equity? Whichever you expect to earn a higher return is the one you should pick. (Well, that’s not really true. Read the next section to find out why not.)&lt;/p&gt;
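<p>The comparison can be sketched in a few lines of Python. The return figures and time-to-exit here are placeholders for illustration, not estimates from this essay:</p>

```python
# Compare $X/year of startup equity against an extra $X/year of salary
# invested in public stocks. The 15% and 7% returns are hypothetical.
def final_value(annual_amount, annual_return, years):
    """Future value of investing `annual_amount` at the end of each year."""
    total = 0.0
    for _ in range(years):
        total = total * (1 + annual_return) + annual_amount
    return total

x = 100_000                              # extra comp per year, either form
startup = final_value(x, 0.15, years=6)  # assumed equity return to exit
stocks = final_value(x, 0.07, years=6)   # assumed public-market return
print(f"startup: ${startup:,.0f}, stocks: ${stocks:,.0f}")
```

<p>Whichever return assumption wins determines the choice—before accounting for the portfolio-level considerations discussed in the next section.</p>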

&lt;h1 id=&quot;the-right-question&quot;&gt;The right question&lt;/h1&gt;

&lt;p&gt;Which do we expect to earn a higher return, startups or the public stock market?&lt;/p&gt;

&lt;p&gt;We don’t have good data on the historical performance of startup employees. We do have data on VC returns, so we can use that.&lt;/p&gt;

&lt;p&gt;“Have VCs historically outperformed the public market?” is the wrong question, because some VCs consistently outperform the average.&lt;/p&gt;

&lt;p&gt;“Have top VCs historically outperformed the public market?” is the wrong question, because future expected performance probably isn’t the same as past performance.&lt;/p&gt;

&lt;p&gt;“Can we expect top VCs to outperform the public market?” is the wrong question, because startup employees don’t earn the same returns as VCs. Employee equity has a worse liquidation preference than VC equity, but investors in VC firms have to pay fund fees, which employees don’t. And employees have the &lt;a href=&quot;https://www.benkuhn.net/optopt/&quot;&gt;meta-option&lt;/a&gt; to keep vesting when their company does well, or to quit when it does poorly. We can use these considerations to estimate the value of employee equity compared to VC equity.&lt;/p&gt;

&lt;p&gt;“Can we expect startup employees to outperform the public market?” is the wrong question, because we need to consider &lt;a href=&quot;https://mdickens.me/2020/01/06/how_much_leverage_should_altruists_use/&quot;&gt;leverage&lt;/a&gt;. Investors in public markets can use leverage to increase their risk and expected return, but startup employees can’t.&lt;/p&gt;

&lt;p&gt;“Can we expect startup employees to outperform the leveraged public market?” is the wrong question, because an effective altruist’s goal isn’t to maximize their own portfolio return, it’s to maximize the expected utility of the overall EA portfolio. If no other EAs work at the startup where you choose to work, then you’re adding better diversification than if you invest in the public market.&lt;/p&gt;

&lt;p&gt;“Do startup employees contribute more expected utility to the EA portfolio than if they invested in the leveraged public market?” is the wrong question, because if they did work at a big company and invest their salary, they might be able to invest in something better than the broad market. For example, I have &lt;a href=&quot;https://mdickens.me/2020/12/14/asset_allocation_for_altruists_with_constraints/&quot;&gt;previously discussed&lt;/a&gt; investing in concentrated value/momentum/trend portfolios, and I made a rough attempt to calculate the expected utility of doing so. For us to prefer to become startup employees, startups would have to look better than the best possible public investment (whether that’s value/momentum/trend or something else).&lt;/p&gt;

&lt;p&gt;“Do startup employees contribute more expected utility to the EA portfolio than if they invested in the optimal set of public investments?” is more or less the right question.&lt;/p&gt;

&lt;h1 id=&quot;modeling-the-solution&quot;&gt;Modeling the solution&lt;/h1&gt;

&lt;p&gt;Do startup employees contribute more expected utility to the EA portfolio than if they invested in the optimal set of public investments?&lt;/p&gt;

&lt;p&gt;The answer to this depends on two things:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;How do we model the answer?&lt;/li&gt;
  &lt;li&gt;What values should we use for the model inputs?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;#2 is hard. #1 is sort of hard, but luckily, it’s already a solved problem. I described an applicable model in &lt;a href=&quot;https://mdickens.me/2020/12/14/asset_allocation_for_altruists_with_constraints/&quot;&gt;Asset Allocation and Leverage for Altruists with Constraints&lt;/a&gt;. In short, we set up a mean-variance optimization problem where we assume 99% of the capital is controlled by other people, and we can decide how to allocate the remaining 1%. Suppose we can allocate between three investment choices:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;A typical investment portfolio, such as a stock market index fund&lt;/li&gt;
  &lt;li&gt;The optimal (ex ante) public investment portfolio&lt;/li&gt;
  &lt;li&gt;Startups&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The simplest method is to assume we should put all our money into just one choice. What is the overall expected utility if we put our money into choice 1, choice 2, or choice 3?&lt;/p&gt;

&lt;p&gt;If we tell our model the expected returns, standard deviations, correlations between these three portfolios, and a utility function, then the model will spit out the expected utility of each choice.&lt;/p&gt;
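<p>Here is a heavily simplified sketch of that setup. All of the inputs below are made up for illustration; they are not the estimates derived later in this essay:</p>

```python
# 99% of the EA portfolio sits in global equities; we allocate the last 1%.
# Approximate expected log utility of the combined portfolio using the
# lognormal shortcut E[log(1+r)] ~= mu - sigma^2/2. Inputs are illustrative.
def expected_log_utility(w, mu_a, sig_a, mu_b, sig_b, corr):
    mu = (1 - w) * mu_a + w * mu_b
    var = ((1 - w) * sig_a) ** 2 + (w * sig_b) ** 2 \
        + 2 * (1 - w) * w * sig_a * sig_b * corr
    return mu - var / 2

equities = (0.05, 0.16)  # assumed mean and stdev of global equities
choices = {
    "index fund":     (0.05, 0.16, 1.0),  # identical to the other 99%
    "optimal public": (0.08, 0.20, 0.8),  # hypothetical better portfolio
    "startups":       (0.15, 0.80, 0.7),  # higher return, far higher risk
}
for name, (mu, sig, corr) in choices.items():
    print(name, round(expected_log_utility(0.01, *equities, mu, sig, corr), 5))
```

<p>Notice that at a 1% weight, even startups’ extreme volatility barely moves overall portfolio variance—which is why a high expected return with imperfect correlation can dominate.</p>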

&lt;p&gt;Let’s say the average EA investment portfolio equals the global equity market, which is at least sort of correct. What’s the expected return and standard deviation of global equities?&lt;/p&gt;

&lt;p&gt;We have no idea how equities will perform in the short run. But in the long run, the market’s return is &lt;a href=&quot;https://awealthofcommonsense.com/2016/09/the-john-bogle-expected-return-formula/&quot;&gt;somewhat predictable&lt;/a&gt;. And over long time horizons, volatility stays pretty consistent, so we can simply assume the future standard deviation of global equities equals the historical standard deviation.&lt;/p&gt;
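<p>The Bogle-style decomposition behind that link breaks the long-run return into three parts. The numbers below are placeholders, not forecasts:</p>

```python
# expected return ~= dividend yield + earnings growth + valuation change
dividend_yield = 0.02     # illustrative
earnings_growth = 0.04    # illustrative
valuation_change = -0.01  # assumed slow P/E mean-reversion per year
expected_return = dividend_yield + earnings_growth + valuation_change
print(f"{expected_return:.0%}")  # → 5%
```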

&lt;p&gt;Similarly, we can approximate the future standard deviation of startups, and their correlation with global equities, by taking the historical standard deviation and correlation and assuming they will stay the same.&lt;/p&gt;

&lt;p&gt;The most difficult input variable is the expected return of startups.&lt;/p&gt;

&lt;h1 id=&quot;what-factors-determine-the-expected-return-of-startups&quot;&gt;What factors determine the expected return of startups?&lt;/h1&gt;

&lt;p&gt;How to estimate the expected return of startups for employees:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Start with some index of VC returns, such as the &lt;a href=&quot;https://www.cambridgeassociates.com/cmb_benchmark_labels/us-venture-capital/&quot;&gt;Cambridge Associates Venture Capital Index&lt;/a&gt;.&lt;/li&gt;
  &lt;li&gt;These indexes usually provide returns net of fees. VC investors have to pay fees, but startup employees don’t. Add &lt;a href=&quot;https://www.investopedia.com/terms/t/two_and_twenty.asp&quot;&gt;2-and-20 fees&lt;/a&gt; back in to get the gross return.&lt;/li&gt;
  &lt;li&gt;Unlike with public market investors, VC firms that beat the market in the past tend to continue to beat the market. Employees can choose to work at startups with funding from top VCs. Add a premium to the expected return to account for this.&lt;/li&gt;
  &lt;li&gt;Maybe EAs can pick startups better than top VC firms. Possibly add a premium.&lt;/li&gt;
  &lt;li&gt;Employees get worse equity terms than VCs, so subtract some discount to account for this.&lt;/li&gt;
  &lt;li&gt;Startup employees get &lt;a href=&quot;https://www.benkuhn.net/optopt/&quot;&gt;meta-options&lt;/a&gt;, which VCs don’t get. Add an appropriate premium.&lt;/li&gt;
  &lt;li&gt;Use the current market environment to forecast how future returns will look compared to past returns. (This is the sketchiest step, so we might skip this and just assume future returns equal historical returns. But I think we don’t want to skip this because we don’t even really know what historical returns were—more on that later.)&lt;/li&gt;
  &lt;li&gt;Giving now might be better than giving later. If so, that means we shouldn’t compare startups to public investments because public investments aren’t the best thing to do with money. Instead, we should compare startup equity to money donated now.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In the next four subsections, let’s break down each of these steps.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;#returns-for-vc-firms&quot;&gt;Returns for VC firms&lt;/a&gt; covers steps 1–3;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#returns-for-employees&quot;&gt;Returns for employees&lt;/a&gt; covers steps 4–6;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#forecasting-future-returns&quot;&gt;Forecasting future returns&lt;/a&gt; covers step 7; and&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#giving-now-vs-later&quot;&gt;Giving now vs. later&lt;/a&gt; covers step 8.&lt;/li&gt;
&lt;/ul&gt;
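<p>Steps 1–7 amount to back-of-the-envelope arithmetic on annual percentage points. The values below are the ballpark figures estimated in later sections, not precise outputs (step 8 changes what we compare against rather than the return itself, so it’s omitted):</p>

```python
# Build up the expected startup-employee return from steps 1-7.
adjustments = {
    "VC index return, net of fees (step 1)":  0.131,
    "add back 2-and-20 fees (step 2)":        0.050,
    "top-VC persistence premium (step 3)":    0.040,
    "EA stock-picking edge (step 4)":         0.000,  # assume none
    "worse employee equity terms (step 5)":  -0.030,
    "meta-option premium (step 6)":           0.110,
    "worse future environment (step 7)":     -0.080,
}
expected_return = sum(adjustments.values())
print(f"{expected_return:.1%}")  # → 22.1%
```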

&lt;h3 id=&quot;returns-for-vc-firms&quot;&gt;Returns for VC firms&lt;/h3&gt;

&lt;p&gt;I used &lt;a href=&quot;https://www.cambridgeassociates.com/wp-content/uploads/2020/07/WEB-2020-Q1-USVC-Benchmark-Book.pdf&quot;&gt;Cambridge Associates’ Venture Capital Index&lt;/a&gt; 2020 report to find the aggregate historical return of VC firms from 1995 to 2018&lt;sup id=&quot;fnref:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;. According to this data set, VCs had a geometric mean return of 13.1% with a standard deviation of 18.9%. That’s a good starting point.&lt;/p&gt;

&lt;p&gt;As the saying goes, a person with one clock always knows what time it is. Someone with two clocks is never quite sure. Cambridge Associates’ &lt;a href=&quot;https://www.cambridgeassociates.com/wp-content/uploads/2018/05/WEB-2017-Q4-USVC-Benchmark-Book.pdf&quot;&gt;2017 report&lt;/a&gt; has data from 1988 to 2016, which gives a geometric mean return of 14.9% and a standard deviation of 17.2%.&lt;sup id=&quot;fnref:12&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:12&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;According to these data sets, VC experienced several regimes:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Strong performance 1988–2000, and especially 1998–2000 at the peak of the tech bubble.&lt;/li&gt;
  &lt;li&gt;Very bad returns 2001–2003.&lt;/li&gt;
  &lt;li&gt;Mediocre returns 2004–2013, with a mix of good years and bad years.&lt;/li&gt;
  &lt;li&gt;Strong returns 2014–2020.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The “historical performance of VC” substantially changes depending on which time period you look at. And we don’t know what sort of regime will come next.&lt;/p&gt;

&lt;p&gt;Another problem: Other VC return databases give entirely different numbers. For example, I could have used the VentureXpert database, which some (e.g., &lt;a href=&quot;http://www.vernimmen.com/ftp/KOCH_S_Research_paper.pdf&quot;&gt;Koch (2014)&lt;/a&gt;&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt;) claim is more accurate. (I used Cambridge Associates purely out of convenience.) Cambridge Associates tends to give higher VC returns than other databases (by a couple percentage points).&lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;I will use the 2020 Cambridge Associates report as a starting point. Just be aware that the startup market will likely behave very differently in the future.&lt;/p&gt;

&lt;p&gt;Now let’s convert net returns to gross returns. If we assume VCs usually charge 2-and-20 fees, this step is pretty easy. Using the Cambridge Associates 1995–2018 data, we find a gross historical return of 18.1% with a 23.5% standard deviation.&lt;/p&gt;
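<p>One crude closed-form way to undo 2-and-20 fees on an annual return is below. It won’t exactly reproduce the 18.1% figure, which depends on how fees get applied year by year, but it lands in the same ballpark:</p>

```python
# Invert net = gross * (1 - carry) - mgmt_fee: a 2% management fee plus
# 20% carry charged on all gains. A rough approximation only.
def gross_from_net(net, mgmt_fee=0.02, carry=0.20):
    return (net + mgmt_fee) / (1 - carry)

print(f"{gross_from_net(0.131):.1%}")  # → 18.9%
```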

&lt;p&gt;Some people (especially VCs) like to talk about how the “top quartile” of VC firms persistently beat the market. This is true, but potentially misleading. VC firms that beat the market in year N are more likely than chance to beat the market in year N+1. But top-quartile firms are by no means guaranteed to stay in the top quartile.&lt;/p&gt;

&lt;p&gt;How strongly do top-VC returns persist? Data from &lt;a href=&quot;https://bfi.uchicago.edu/wp-content/uploads/2020/11/BFI_WP_2020167.pdf&quot;&gt;Harris et al. (2020)&lt;/a&gt;&lt;sup id=&quot;fnref:5&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:5&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;5&lt;/a&gt;&lt;/sup&gt; (1984–2014) is presented in the table below&lt;sup id=&quot;fnref:9&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:9&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;6&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;The table uses these terms:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;IRR = internal rate of return, or the annual return achieved by investors.&lt;/li&gt;
  &lt;li&gt;PME = public market equivalent, or the total return of VCs relative to the S&amp;amp;P 500 (see &lt;a href=&quot;http://web.mit.edu/aschoar/www/KaplanSchoar2005.pdf&quot;&gt;Kaplan &amp;amp; Schoar (2005)&lt;/a&gt;&lt;sup id=&quot;fnref:7&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:7&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;7&lt;/a&gt;&lt;/sup&gt;). For example, if a VC fund turns $1 into $1.60 while the same $1 in the S&amp;amp;P would have become $1.50 over the same period, then the PME is 1.60 / 1.50 ≈ 1.07.&lt;/li&gt;
&lt;/ul&gt;
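<p>Strictly speaking, the Kaplan &amp;amp; Schoar PME compares wealth multiples rather than raw returns. In the simplest case (one contribution, one distribution) it reduces to a ratio:</p>

```python
# PME = (fund wealth multiple) / (index wealth multiple), in the single-
# contribution, single-distribution case. PME > 1 means the fund beat the index.
def pme(fund_total_return, index_total_return):
    return (1 + fund_total_return) / (1 + index_total_return)

print(round(pme(0.60, 0.50), 3))  # → 1.067
```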

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt;IRR&lt;/th&gt;
      &lt;th&gt;PME&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Average VC&lt;/td&gt;
      &lt;td&gt;14.8%&lt;/td&gt;
      &lt;td&gt;1.22&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Top-quartile VC, backward-looking&lt;/td&gt;
      &lt;td&gt;45.3%&lt;/td&gt;
      &lt;td&gt;2.60&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Top-quartile VC, forward-looking&lt;/td&gt;
      &lt;td&gt;26.3%&lt;/td&gt;
      &lt;td&gt;1.70&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Results for VC firms 2001–2014:&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt;IRR&lt;/th&gt;
      &lt;th&gt;PME&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Average VC&lt;/td&gt;
      &lt;td&gt;10.4%&lt;/td&gt;
      &lt;td&gt;(+)&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Top-quartile VC, backward-looking&lt;/td&gt;
      &lt;td&gt;30.0%&lt;/td&gt;
      &lt;td&gt;2.11&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Top-quartile VC, forward-looking&lt;/td&gt;
      &lt;td&gt;14.7%&lt;/td&gt;
      &lt;td&gt;1.20&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;(+) This figure is not provided by Harris et al.&lt;/p&gt;

&lt;p&gt;In the full sample (1984–2014), top-quartile VCs retained about half of their outperformance out of sample, although they lost most of their relative outperformance (compared to the S&amp;amp;P 500). In the post-2001 sample, they lost most of their outperformance both in relative and absolute terms, but still showed nonzero persistence.&lt;/p&gt;

&lt;p&gt;Harris et al. also did a regression analysis, and found that across all VC firms, one third of PME outperformance persisted.&lt;/p&gt;

&lt;p&gt;(Note: Harris et al. used data from &lt;a href=&quot;https://www.burgiss.com/&quot;&gt;Burgiss&lt;/a&gt;, yet another source for VC returns.)&lt;/p&gt;

&lt;h3 id=&quot;returns-for-employees&quot;&gt;Returns for employees&lt;/h3&gt;

&lt;p&gt;Almost all startups give preferred shares to VCs and common shares to employees. Normally, preferred shares get a 1x &lt;a href=&quot;https://www.investopedia.com/terms/l/liquidation-preference.asp&quot;&gt;liquidation preference&lt;/a&gt;. That means if the company exits, VCs are guaranteed to get back at least the money they put in before employees get anything. This makes employee equity worth less than it appears.&lt;/p&gt;

&lt;p&gt;Example:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;A startup raises funding at a $100 million valuation. VCs have $20 million of preferred shares; founders and employees have $80 million of common shares.&lt;/li&gt;
  &lt;li&gt;Later, the startup is acquired for $50 million.&lt;/li&gt;
  &lt;li&gt;VCs get back their $20 million. That leaves just $30 million to split among the common shareholders. The valuation went down by 50%, but employees lost 62.5% of their equity value.&lt;/li&gt;
&lt;/ul&gt;
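<p>The payout waterfall in that example, in code:</p>

```python
# 1x liquidation preference: preferred (VC) money comes out first, and
# common shareholders split whatever remains.
def common_payout(exit_value, preferred_invested):
    return max(exit_value - preferred_invested, 0.0)

common_at_funding = 80e6               # founders' + employees' stake
remaining = common_payout(50e6, 20e6)  # $30M left after the 1x preference
loss = 1 - remaining / common_at_funding
print(f"${remaining/1e6:.0f}M left for common, {loss:.1%} loss")  # → $30M left for common, 62.5% loss
```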

&lt;p&gt;This is basically standard practice. Some startups also give special advantages to VCs. There are lots of ways they can do this, such as:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;a liquidation preference that’s higher than 1x (e.g., a 2x preference guarantees that VCs get to double their money before employees get anything)&lt;/li&gt;
  &lt;li&gt;a &lt;a href=&quot;https://www.investopedia.com/terms/f/fullratchet.asp&quot;&gt;ratchet&lt;/a&gt;, which gives VCs protection against dilution at the expense of employees&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;These sorts of conditions are really bad for startup employees. You might just want to avoid any startup that offers terms like these. (If you work at a startup with no sketchy terms, and they raise a new round of funding that introduces sketchy terms, that alone might be enough reason to start looking for a new job.) For more on what to watch out for, read Ben Kuhn’s &lt;a href=&quot;https://www.benkuhn.net/terms/&quot;&gt;How bad are fundraising terms?&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Even without any bad terms, employee stock options introduce some other problems:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;If you don’t exercise your options, they could expire before the company exits.&lt;/li&gt;
  &lt;li&gt;If you don’t exercise your options and they go up in value, you might have to pay income tax or &lt;a href=&quot;https://www.investopedia.com/terms/a/alternativeminimumtax.asp&quot;&gt;alternative minimum tax&lt;/a&gt; instead of capital gains tax.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can avoid these problems by exercising your options as soon as they vest, or even &lt;a href=&quot;https://www.investopedia.com/terms/e/earlyexercise.asp&quot;&gt;early exercising&lt;/a&gt; if you can. But even if you do exercise, you might end up paying higher taxes when the startup exits because you’ll get pushed into a higher tax bracket. You can (at least partially&lt;sup id=&quot;fnref:24&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:24&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;8&lt;/a&gt;&lt;/sup&gt;) mitigate this by donating the stock instead of selling it.&lt;/p&gt;

&lt;p&gt;(A lot of people can’t afford to exercise their employee stock options. Perhaps an EA org could make grants or loans to help EAs exercise their options. That would be difficult to set up and they’d have to carefully vet grant recipients, but maybe it could work.)&lt;/p&gt;

&lt;p&gt;According to my rough estimate, a 1x liquidation preference reduces the expected value of common shares by about 10%&lt;sup id=&quot;fnref:22&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:22&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;9&lt;/a&gt;&lt;/sup&gt;. That equates to around 1–2% per year, depending on how long the company takes to exit. Let’s assume a 3% annual discount due to liquidation preference plus tax disadvantage.&lt;/p&gt;
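<p>The conversion from a one-time haircut to an annual drag depends on how long the company takes to exit. For a few hypothetical exit timelines:</p>

```python
# Spread a one-time 10% haircut evenly (geometrically) over the years to exit.
for years in (5, 7, 10):
    annual_drag = 1 - (1 - 0.10) ** (1 / years)
    print(f"{years} years to exit: {annual_drag:.1%}/year")
```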

&lt;p&gt;Many startups pay below-market compensation by claiming that their equity is underpriced. Don’t buy it. The whole point of working at a startup is that your equity will earn (in expectation) above-market returns. If your employer adjusts for this by giving you less equity, that ruins the (monetary) advantage of working at a startup.&lt;/p&gt;

&lt;p&gt;This essay is not about employee equity terms, but it’s an important topic for anyone considering working at a startup. For an in-depth guide, see &lt;a href=&quot;https://github.com/jlevy/og-equity-compensation&quot;&gt;The Open Guide to Equity Compensation&lt;/a&gt; by Joshua Levy et al. For something shorter, I recommend Ben Kuhn’s &lt;a href=&quot;https://www.benkuhn.net/offer/&quot;&gt;checklist for stock option offers&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://www.benkuhn.net/optopt/&quot;&gt;Startup options are better than they look&lt;/a&gt; because employees get “meta-options”: your compensation package gives you the option to vest stock options at the current price for the next four years. If the company does well, you can “exercise” your meta-options by continuing to work there. If it doesn’t do well, you can quit. Neither startup founders nor startup investors can do this.&lt;/p&gt;

&lt;p&gt;I piggybacked on &lt;a href=&quot;https://github.com/benkuhn/option-val&quot;&gt;Ben Kuhn’s meta-option model&lt;/a&gt; (my code &lt;a href=&quot;https://github.com/michaeldickens/option-val/tree/apr&quot;&gt;here&lt;/a&gt;) and found that meta-options are worth an extra 16 percentage points of return (!!). I just did a quick calculation and didn’t perform a sensitivity analysis or anything, so this number could be way off, but let’s go with it for now. If correct, this number is so large that working at a startup looks more profitable than starting a startup, unless you believe you’d make for an unusually good entrepreneur.&lt;/p&gt;

&lt;p&gt;It’s worth mentioning that you could work at a big company that offers equity compensation, which also behaves like a meta-option—albeit a much less valuable one, because big company stock is not as volatile. Using similar methodology, I found that meta-options at a big company are worth 5 percentage points. That means startup meta-options provide an extra 11 percentage points of value (16% – 5%).&lt;/p&gt;

&lt;p&gt;As far as I know, Ben Kuhn invented the concept of meta-options, and no one has ever rigorously analyzed them. My own modification of his program could contain bugs or logical flaws. The value of meta-options &lt;em&gt;could&lt;/em&gt; be large enough to dominate every other factor, or they could be worth nothing. This subject strongly warrants a deeper investigation.&lt;/p&gt;
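<p>To make the concept concrete, here is a toy Monte Carlo in the spirit of (but much simpler than) Ben Kuhn’s model. It is not his code, every parameter is made up, and it models shares rather than options; it only illustrates &lt;em&gt;why&lt;/em&gt; the option to quit has value:</p>

```python
import math
import random

def expected_comp(quit_option, vol=0.6, years=4, trials=50_000):
    """Each year you vest one 'share' of the company, valued when it
    exits after `years`. With the meta-option, you quit as soon as the
    company's value drops below its level when you joined, earning a
    market salary of 1.0/year for the remaining years instead."""
    rng = random.Random(0)  # same paths in both settings (paired trials)
    total = 0.0
    for _ in range(trials):
        # Lognormal path with zero drift (a martingale), so without the
        # quit option, expected comp is exactly `years` tranches.
        path = [1.0]
        for _ in range(years):
            path.append(path[-1] * math.exp(rng.gauss(-vol**2 / 2, vol)))
        vested_years = years
        if quit_option:
            for t in range(years):
                if path[t] < 1.0:   # company below join-date value: quit
                    vested_years = t
                    break
        total += vested_years * path[years] + (years - vested_years) * 1.0
    return total / trials

print(round(expected_comp(True), 2), "vs", round(expected_comp(False), 2))
```

<p>In this toy model, the meta-option’s value comes entirely from swapping conditionally low-value equity for salary. Ben Kuhn’s actual model values options (with strike prices) and handles vesting more carefully, so treat this only as intuition.</p>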

&lt;p&gt;&lt;div align=&quot;center&quot;&gt;–––––&lt;/div&gt;&lt;/p&gt;

&lt;p&gt;If we can get around the practical concerns, EAs can easily match the returns of top VC firms by getting jobs at their portfolio companies. Can EAs do even better? Can we outperform VCs at picking winning startups?&lt;/p&gt;

&lt;p&gt;Let me say up front that I don’t believe EAs in general can outperform top-quartile VC firms. But when I say I don’t believe it, what I really mean is I assign it less than a 50% probability. So it might still be worth trying.&lt;/p&gt;

&lt;p&gt;(To be more precise, I would give an 80% probability that at least a handful of EAs could pick winning startups better than top VCs, but I don’t know how to identify those people in advance.&lt;sup id=&quot;fnref:23&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:23&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;10&lt;/a&gt;&lt;/sup&gt; I’d estimate a 30% chance that, if a group of EAs decide to go work at startups and make a conscious effort to pick winning startups, then they will do better than top-quartile VCs.)&lt;/p&gt;

&lt;p&gt;EAs as a group are really smart. But professional investors are also really smart, and the overwhelming majority of them still fail to beat the market. Maybe EAs are smarter? Maybe EAs are more rational or clear-thinking in some way that most professional investors aren’t? I don’t know.&lt;/p&gt;

&lt;p&gt;It’s possible that EAs could do a better job than VCs of identifying the best startups. On the other hand, EAs might also do a better job of identifying the best &lt;em&gt;public&lt;/em&gt; investments. On the &lt;em&gt;other&lt;/em&gt; other hand, startups are hard to invest in, so it might be easier to find underappreciated opportunities. If EAs have an edge in public markets, they probably have an even bigger edge in startups.&lt;/p&gt;

&lt;p&gt;I don’t have strong evidence on this either way, so I’m leaning on my prior that almost nobody can beat the market. We do have at least &lt;em&gt;some&lt;/em&gt; evidence, but I’m not sure how to interpret it.&lt;/p&gt;

&lt;p&gt;The evidence we do have:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Over the past few years, EA investors &lt;a href=&quot;https://forum.effectivealtruism.org/posts/zA6AnNnYBwuokF8kB/is-effective-altruism-growing-an-update-on-the-stock-of&quot;&gt;have beaten the market&lt;/a&gt;. This is mostly driven by a single company (FTX), so I don’t know how much we can infer from this.&lt;/li&gt;
  &lt;li&gt;One person reviewed a few months’ worth of EAs’ proposed investing ideas and found that they had beaten the market over those few months. (I don’t want to go into specifics because this review was not shared publicly, but that’s the gist of it.)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In the rest of this essay, I will assume EAs can’t beat top-quartile VCs—not because I am confident that this is true, but because I don’t know how to evaluate the evidence. It could be a good idea to look into this in more depth.&lt;/p&gt;

&lt;h3 id=&quot;forecasting-future-returns&quot;&gt;Forecasting future returns&lt;/h3&gt;

&lt;p&gt;As discussed &lt;a href=&quot;#returns-for-vc-firms&quot;&gt;above&lt;/a&gt;, startup returns tend to vary a lot over time, so past performance does a poor job of predicting future performance. But we can’t choose between two investments (in this case, public investments vs. startups) unless we believe something about how they will perform. So what should we believe?&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Public US equity and bond markets have unusually low return expectations right now, thanks to high valuations/low yields. When stocks and bonds look bad, money flows into alternatives, including VC. Therefore, it’s reasonable to expect VC to have worse future returns as well.&lt;/li&gt;
  &lt;li&gt;For the past few years, VCs have been investing much more money than the historical average (&lt;a href=&quot;https://www.statista.com/statistics/277501/venture-capital-amount-invested-in-the-united-states-since-1995/&quot;&gt;Statista&lt;/a&gt;, 2021). Crowdedness suggests low future returns. The only other time we saw similar crowdedness was in the year 2000, which was the beginning of a major losing streak for VC.&lt;/li&gt;
  &lt;li&gt;Private equity (that is, leveraged buyouts, not VC) has gotten more expensive over the last decade or so (&lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2639647&quot;&gt;Chingono &amp;amp; Rasmussen, 2015&lt;/a&gt;&lt;sup id=&quot;fnref:8&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:8&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;11&lt;/a&gt;&lt;/sup&gt;), which predicts muted future performance. And Harris et al.&lt;sup id=&quot;fnref:5:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:5&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;5&lt;/a&gt;&lt;/sup&gt; found that, while top VC firms’ outperformance persisted over their whole sample, top private equity firms’ returns stopped persisting after 2001. It’s not clear why private equity’s persistence didn’t persist, but whatever the reason, the same thing might happen to VCs.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The outlook for VCs looks worse than usual. The question is, how much worse? Should we expect future returns to be 1 percentage point lower per year? Or 20 percentage points?&lt;/p&gt;

&lt;p&gt;Well, how much worse do US equities and bonds look? We can reliably predict bonds’ long-term returns using the yield. 5-year bonds currently yield around 1%, compared to a 1984–2014 average nominal return of 8% (&lt;a href=&quot;http://pages.stern.nyu.edu/~adamodar/New_Home_Page/datafile/histretSP.html&quot;&gt;Damodaran, 2021&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;Stock returns are harder to predict. In the short term, they’re almost impossible to predict, but we can estimate their return over 10+ year periods with reasonable accuracy. Under a more &lt;a href=&quot;https://www.investopedia.com/terms/e/efficientmarkethypothesis.asp&quot;&gt;EMH&lt;/a&gt;-y model that assumes no change in market valuation (e.g., &lt;a href=&quot;https://www.aqr.com/Insights/Research/Alternative-Thinking/2021-Capital-Markets-Assumptions-for-Major-Asset-Classes&quot;&gt;AQR, 2021&lt;/a&gt;), forward-looking US equity return expectations look around 5 percentage points worse than they did from 1984 to 2014 (4% vs. 9% after inflation). According to a model that assumes valuations will revert to their historical average (such as &lt;a href=&quot;https://interactive.researchaffiliates.com/asset-allocation#!/?currency=USD&amp;amp;model=ER&amp;amp;scale=LINEAR&amp;amp;terms=REAL&quot;&gt;Research Affiliates, 2021&lt;/a&gt;), returns look 10 percentage points worse (-1% vs. 9%). The truth is probably somewhere in the middle.&lt;/p&gt;

&lt;p&gt;In theory, all asset classes should have the same risk-adjusted return. Startups are riskier than stocks or bonds. So if return expectations for stocks/bonds go down by some amount, expectations for startups should go down by more than that. But we don’t know if this holds in practice, and we don’t even know exactly how risky startups are. If bonds look 7% worse, and stocks look between 5% and 10% worse, then maybe we could assume VC will perform 7% to 10% worse in expectation.&lt;/p&gt;

&lt;p&gt;As for top-quartile VCs: according to Harris et al., over the full historical sample, they outperformed average VCs by a full 11 percentage points. In the post-2000 era, they only outperformed by 4 percentage points, and had a public-market equivalent performance of 1.2 (which means they only weakly outperformed the S&amp;amp;P 500). It seems fair to assume that the future for VCs will look more like 2001–2014 than like 1984–2000, as the pre-2000 VC market was probably less efficient. We could simply assume top VCs will perform 4 percentage points better than average VCs going forward. But it’s also possible that the gap between average and top VCs will continue to narrow.&lt;/p&gt;

&lt;h3 id=&quot;giving-now-vs-later&quot;&gt;Giving now vs. later&lt;/h3&gt;

&lt;p&gt;Working at a startup is comparable to working for a salary and investing it. But if giving now beats giving later, then you wouldn’t want to invest your salary. Instead, you’d want to donate it right away. This makes working at a startup look worse because you can’t donate your equity until it becomes liquid.&lt;/p&gt;

&lt;p&gt;This possibility makes the comparison more difficult, so I will mostly ignore it. It’s not as simple as applying a fixed discount rate to the value of your startup equity. Just be aware that my methodology for comparing startups vs. big companies only works if giving later is at least as good as giving now, at least for the next few years.&lt;/p&gt;

&lt;h1 id=&quot;putting-together-the-expected-return&quot;&gt;Putting together the expected return&lt;/h1&gt;

&lt;p&gt;If we combine all the numbers I came up with in the previous section, we get:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;13% after-fees historical return to VC firms, or 10% after inflation.&lt;/li&gt;
  &lt;li&gt;15% real historical return before fees.&lt;/li&gt;
  &lt;li&gt;Add 4% for the persistence of top-quartile VCs, giving 19%.&lt;/li&gt;
  &lt;li&gt;Add 0% for EAs’ extra outperformance. Still 19%.&lt;/li&gt;
  &lt;li&gt;Subtract 3% for employee equity terms, giving 16%.&lt;/li&gt;
  &lt;li&gt;Add 11% for meta-options, giving 27%.&lt;/li&gt;
  &lt;li&gt;Subtract 9% for the relatively poor market outlook, giving 18%.&lt;/li&gt;
  &lt;li&gt;Ignore giving now vs. later. Still 18%.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Thus, I predict an 18% real return for startup employees who try to maximize their earnings (by working at startups with funding from top VCs, getting good equity terms, and exercising their meta-options when necessary).&lt;/p&gt;
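&lt;p&gt;The running total above (starting from the 15% real before-fees return) can be restated as a minimal sketch:&lt;/p&gt;

```python
# The expected-return arithmetic above, restated. Each entry is one of
# the rough point estimates from this essay, in percentage points of
# real annual return.
adjustments = [
    15,   # real historical VC return before fees
    4,    # persistence of top-quartile VCs
    0,    # EA outperformance beyond top-quartile VCs
    -3,   # employee equity terms
    11,   # meta-options
    -9,   # relatively poor market outlook
]
assert sum(adjustments) == 18  # percent per year, as above
```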

&lt;p&gt;How big are the error bars on each of these numbers? In order:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Historical return depends a lot on what time period you look at. &lt;strong&gt;Wide error bars.&lt;/strong&gt;&lt;/li&gt;
  &lt;li&gt;Calculating before-fees return just requires knowing fees, which are usually 2-and-20. Narrow error bars.&lt;/li&gt;
  &lt;li&gt;It wouldn’t be too surprising if top-quartile VCs had as much as 11% extra return or as little as 0%. &lt;strong&gt;Wide error bars.&lt;/strong&gt;&lt;/li&gt;
  &lt;li&gt;Even if we have good reason to expect EAs to do better at picking startups than top-quartile VCs, it seems unlikely that they could perform &lt;em&gt;much&lt;/em&gt; better. Narrow error bars.&lt;/li&gt;
  &lt;li&gt;Liquidation preference matters relatively little. Narrow error bars.&lt;/li&gt;
  &lt;li&gt;The concept of meta-options is complicated and has received hardly any attention. My best-guess estimate for their value is very large, but I could be way off. &lt;strong&gt;Extremely wide error bars.&lt;/strong&gt;&lt;/li&gt;
  &lt;li&gt;Future performance is really hard to predict, even over long time horizons. &lt;strong&gt;Wide error bars.&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;By my estimate, startup employees’ expected returns could optimistically be as high as 52% (!) or as low as -2%.&lt;sup id=&quot;fnref:10&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:10&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;12&lt;/a&gt;&lt;/sup&gt; (Remember, these are &lt;em&gt;expected&lt;/em&gt; returns. &lt;em&gt;Realized&lt;/em&gt; returns could fluctuate by much more than this. Startups in aggregate could easily realize a 100% return next year, and I wouldn’t find that surprising, but I would be crazy to &lt;em&gt;expect&lt;/em&gt; it to happen.)&lt;/p&gt;

&lt;h1 id=&quot;risk-and-correlation-of-startups&quot;&gt;Risk and correlation of startups&lt;/h1&gt;

&lt;p&gt;Expected return alone isn’t what we care about. We really want to know &lt;em&gt;risk-adjusted return&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;And we don’t just care about the risk-adjusted return of startups in isolation. We want to know how they fit into a broader EA investment portfolio.&lt;/p&gt;

&lt;p&gt;There are two equivalent ways of looking at this:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Find the risk of startups, and their correlation to the aggregate EA investment portfolio. Then we can calculate whether EAs on the margin should work at startups instead of big companies.&lt;/li&gt;
  &lt;li&gt;Find the &lt;a href=&quot;https://www.investopedia.com/terms/a/alpha.asp&quot;&gt;alpha&lt;/a&gt; of startups relative to the EA portfolio. If alpha &amp;gt; 0, that means at least some EAs should work at startups.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I will focus on the first because I find it more intuitive. I also calculated the second and got similar results (not presented in this essay).&lt;/p&gt;

&lt;p&gt;I’m looking at the risk and correlation of the startup industry, rather than the average risk/correlation of a single startup. We can think of it as a collective decision by many EAs to work at a diversified group of startups, rather than the decision of a single person.&lt;/p&gt;

&lt;p&gt;As with expected return, we have no way to know the future risk of startups, or their correlation to the EA portfolio. But with risk and correlation, we get to make some simplifying assumptions.&lt;/p&gt;

&lt;p&gt;We can’t learn much about the future return of an asset class by looking at its past return. Markets are reasonably efficient, so if an asset class performs well, more money floods in and performance reverts to the mean. But the efficient market hypothesis doesn’t say &lt;em&gt;risk&lt;/em&gt; mean-reverts. Studies show that, at least for public equities, historical volatility is a pretty good predictor of future volatility (e.g., &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3074529&quot;&gt;Dreyer &amp;amp; Hubrich, 2017&lt;/a&gt;&lt;sup id=&quot;fnref:25&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:25&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;13&lt;/a&gt;&lt;/sup&gt;).&lt;/p&gt;

&lt;p&gt;Three different data sets all give similar(ish) numbers for startups’ standard deviations:&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Data Set (Gross)&lt;/th&gt;
      &lt;th&gt;Standard Deviation&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Cambridge Associates, 1988–2016&lt;/td&gt;
      &lt;td&gt;21.4%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Cambridge Associates, 1995–2018&lt;/td&gt;
      &lt;td&gt;23.5%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Harris et al., 1984–2014&lt;/td&gt;
      &lt;td&gt;28.5%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Note that, according to Cambridge Associates, top-quartile VCs have higher standard deviations than average VCs (25% or 28%, depending on which time horizon you use). So if we only work at top startups, we should bump these numbers up by a few percentage points. Also, startups don’t have public prices the same way stocks do, and VCs have some leeway to value their portfolios however they want. I expect that they tilt their reported valuations toward low volatility to make themselves look better, so the “true” volatility is probably higher.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;http://sandhillecon.com/pdf/MeasuringRiskForVentureAndBuyouts.pdf&quot;&gt;Woodward (2009)&lt;/a&gt;&lt;sup id=&quot;fnref:13&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:13&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;14&lt;/a&gt;&lt;/sup&gt; argues that, because startup valuations tend to lag the market, a naive regression doesn’t show the true relationship between startups and public equities. The paper finds that startups have a stock market &lt;a href=&quot;https://www.investopedia.com/terms/b/beta.asp&quot;&gt;beta&lt;/a&gt; of a little over 2, which corresponds to a standard deviation of about 35%.&lt;/p&gt;

&lt;p&gt;For correlation, as with standard deviation, we can assume the future looks the same as the past. Historical correlation between startups and public equities was around r=0.7. (My own analysis found a correlation of 0.6, and &lt;a href=&quot;http://sandhillecon.com/pdf/MeasuringRiskForVentureAndBuyouts.pdf&quot;&gt;Woodward (2009)&lt;/a&gt;&lt;sup id=&quot;fnref:13:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:13&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;14&lt;/a&gt;&lt;/sup&gt; found a correlation of around 0.7–0.8 using more limited data but better methodology. Woodward’s analysis suggests that the naive approach underestimates the true correlation. So let’s use 0.7.)&lt;/p&gt;

&lt;h1 id=&quot;leverage&quot;&gt;Leverage&lt;/h1&gt;

&lt;p&gt;Many EA investors probably want to use &lt;a href=&quot;https://mdickens.me/2020/01/06/how_much_leverage_should_altruists_use/&quot;&gt;leverage&lt;/a&gt;. But startup employees can’t leverage their equity: they get however much they get based on their employment contract, and there’s no way to borrow money to get more equity.&lt;sup id=&quot;fnref:15&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:15&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;15&lt;/a&gt;&lt;/sup&gt; Instead of comparing startup equity to a public investment portfolio, we should compare startup equity to an &lt;em&gt;optimally leveraged&lt;/em&gt; public investment portfolio (taking into account that leverage typically costs more than theoretical models assume).&lt;/p&gt;

&lt;h1 id=&quot;startups-vs-public-equities&quot;&gt;Startups vs. public equities&lt;/h1&gt;

&lt;p&gt;Now that we’ve discussed the main considerations, we can return to the original question: is it better for earners-to-give to work at high-paying companies and invest their salaries in the market, or to work at startups and “invest” in startup equity?&lt;/p&gt;

&lt;p&gt;Some additional assumptions:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Our goal is to maximize the &lt;a href=&quot;https://www.effisols.com/basics/rebal.pdf&quot;&gt;geometric return&lt;/a&gt; of the overall EA investment portfolio. (This is consistent with logarithmic utility of money.)&lt;/li&gt;
  &lt;li&gt;We can only invest in two things: public equities or startups.&lt;/li&gt;
  &lt;li&gt;We control 1% of the EA portfolio. We can’t affect the other 99%.&lt;/li&gt;
  &lt;li&gt;EAs currently invest all their money in public equities, and none in startups. (The latter is obviously false, but it’s also sort of true: on the margin, earners-to-give can consider working for startups that don’t already have any EAs working for them. That set of startups has 0% EA investment.)&lt;/li&gt;
  &lt;li&gt;If we buy public investments, we can use up to 2:1 leverage.&lt;sup id=&quot;fnref:17&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:17&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;16&lt;/a&gt;&lt;/sup&gt;&lt;/li&gt;
  &lt;li&gt;Public equities earn an expected real return of 3% with a standard deviation of 16%.&lt;sup id=&quot;fnref:14&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:14&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;17&lt;/a&gt;&lt;/sup&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Recall from above that we’re giving startup equity an 18% expected real return, a 35% standard deviation, and a 0.7 correlation to public equities.&lt;/p&gt;

&lt;p&gt;Our two choices:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Work at a big company. Invest our salary in public equities with 2:1 leverage.&lt;/li&gt;
  &lt;li&gt;Work at a startup.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Given all the stated assumptions, working at a startup is more than four times better than working at a big company (200 expected utility vs. 37 expected utility, according to a scaled logarithmic utility function&lt;sup id=&quot;fnref:16&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:16&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;18&lt;/a&gt;&lt;/sup&gt;).&lt;/p&gt;

&lt;p&gt;Suppose we hold everything else constant but reduce the expected real return of startups. The return needs to be as low as 1% before the startup looks like the worse choice. (Notice that that’s lower than the 3% expected return of the public stock market, even before accounting for leverage. Startups with a 2% expected return are still (barely) preferable to public equities with a 3% return because we’re assuming startups have a lower correlation to the EA portfolio.) So even under a much more pessimistic projection for startup returns, they still look preferable to big companies.&lt;/p&gt;
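&lt;p&gt;The direction of this comparison can be sanity-checked with a crude sketch, using the approximation that geometric return equals arithmetic return minus half the variance. This simplification ignores leverage costs and will not reproduce the scaled utility numbers above or the exact break-even point; it only checks that the startup choice comes out ahead under the stated assumptions.&lt;/p&gt;

```python
# Crude sanity check of the comparison above, using the approximation
# geometric return = arithmetic return minus half the variance.
# Figures come from the stated assumptions; leverage costs are ignored
# and the scaled utility numbers in the essay are not reproduced.
EQ_MU, EQ_SD = 0.03, 0.16            # public equities, real
ST_MU, ST_SD, RHO = 0.18, 0.35, 0.7  # startups, correlation to equities
W = 0.01                             # share of the EA portfolio we control

# Choice 1: our share goes into 2:1 leveraged public equities.
mu_bigco = (1 - W) * EQ_MU + W * 2 * EQ_MU
sd_bigco = (1 - W) * EQ_SD + W * 2 * EQ_SD   # perfectly correlated
geo_bigco = mu_bigco - sd_bigco ** 2 / 2

# Choice 2: our share goes into startup equity.
mu_startup = (1 - W) * EQ_MU + W * ST_MU
var_startup = ((1 - W) * EQ_SD) ** 2 + (W * ST_SD) ** 2 \
    + 2 * (1 - W) * EQ_SD * W * ST_SD * RHO
geo_startup = mu_startup - var_startup / 2

# With an 18 percent expected startup return, the startup choice gives
# the EA portfolio a higher expected geometric return.
assert geo_startup > geo_bigco
```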

&lt;h1 id=&quot;startups-vs-an-optimized-public-investment-portfolio&quot;&gt;Startups vs. an optimized public investment portfolio&lt;/h1&gt;

&lt;p&gt;Buying an index of public equities might not be the best way to invest one’s big-company salary. I personally prefer to invest in &lt;a href=&quot;https://mdickens.me/2020/11/23/uncorrelated_investing/#factor-investing&quot;&gt;concentrated value, momentum, and trend strategies&lt;/a&gt;. Some EAs believe cryptocurrency or AI stocks will beat the market. The specifics don’t matter too much. What matters is that if you believe some other investment has substantially better expected performance than a broad index fund, then you should use that other investment as your benchmark instead. And startups need to look better than that benchmark.&lt;/p&gt;

&lt;p&gt;My best guess is that a concentrated value/momentum/trend portfolio will earn an expected real return of 6% with a standard deviation of 11%. (Of course, as with my estimates for startup returns, these numbers are not remotely robust.) If we also use 2:1 leverage, then value/momentum/trend still looks somewhat worse than startups, although not by as big a margin (126 expected utility points vs. 200). If startups returned 10% instead of 18%, then value/momentum/trend would be the better choice.&lt;/p&gt;
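&lt;p&gt;The same crude geometric-return approximation (arithmetic return minus half the variance) can be applied to this benchmark. The 0.6 correlation between value/momentum/trend and the EA portfolio is my assumption, not a figure from this essay, and leverage costs are again ignored; under those assumptions, the sketch agrees with both claims above.&lt;/p&gt;

```python
# Same approximation applied to the leveraged val/mom/trend benchmark.
# The essay gives no correlation between val/mom/trend and the EA
# portfolio, so the 0.6 here is my assumption; leverage costs are
# again ignored.
EQ_MU, EQ_SD = 0.03, 0.16
ST_SD, ST_RHO = 0.35, 0.7
VMT_MU, VMT_SD, VMT_RHO = 0.12, 0.22, 0.6  # 6 percent and 11 percent at 2:1

W = 0.01  # share of the EA portfolio we control

def geo(mu, sd, rho):
    # Expected geometric return of the whole EA portfolio when our
    # share W goes into an asset with the given mean, volatility, and
    # correlation to the public-equity remainder.
    m = (1 - W) * EQ_MU + W * mu
    v = ((1 - W) * EQ_SD) ** 2 + (W * sd) ** 2 \
        + 2 * (1 - W) * EQ_SD * W * sd * rho
    return m - v / 2

# Startups at an 18 percent expected return edge out val/mom/trend...
assert geo(0.18, ST_SD, ST_RHO) > geo(VMT_MU, VMT_SD, VMT_RHO)
# ...but at a 10 percent return, val/mom/trend comes out ahead.
assert geo(VMT_MU, VMT_SD, VMT_RHO) > geo(0.10, ST_SD, ST_RHO)
```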

&lt;h1 id=&quot;alternative-predictionless-approach&quot;&gt;Alternative: Predictionless approach&lt;/h1&gt;

&lt;p&gt;Alternatively, take the same basic model as above, but don’t try to predict the future. Instead, assume asset classes will perform exactly as well in the future as they performed in the past. As I discussed above, this approach has issues—performance fluctuates a lot over time, so past performance doesn’t tell us what will happen in the future. But there’s also something appealing about this method. Trying to predict future performance leaves lots of room for you to bias the outcome toward what you (perhaps subconsciously) want. It’s harder to introduce bias if you just use past performance.&lt;sup id=&quot;fnref:11&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:11&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;19&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;For the predictionless approach, I estimated the expected return to employee equity as:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;13% after-fees historical return to VC firms, or 10% after inflation&lt;/li&gt;
  &lt;li&gt;15% real historical return before fees&lt;/li&gt;
  &lt;li&gt;Subtract 2% for employee liquidation preference, giving 13%&lt;/li&gt;
  &lt;li&gt;Add 11% for meta-options, giving 24%&lt;/li&gt;
  &lt;li&gt;Add 4% for the persistence of top-quartile VCs, giving 28%&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For public equities and for my value/momentum/trend portfolio, instead of making projections, I used the (estimated) historical return after inflation from 1995 to 2018:&lt;sup id=&quot;fnref:19&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:19&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;20&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt;Return&lt;/th&gt;
      &lt;th&gt;Std Dev&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Public Equities&lt;/td&gt;
      &lt;td&gt;9%&lt;/td&gt;
      &lt;td&gt;19%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Val/Mom/Trend&lt;/td&gt;
      &lt;td&gt;15%&lt;/td&gt;
      &lt;td&gt;12%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;The three choices have the following expected utilities:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Public equities: 142 utility&lt;/li&gt;
  &lt;li&gt;Val/Mom/Trend: 288 utility&lt;/li&gt;
  &lt;li&gt;Startups: 279 utility&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Startups look preferable to public equities, but slightly worse than value/momentum/trend.&lt;/p&gt;

&lt;h1 id=&quot;alternative-models-are-bad-what-if-we-dont-use-a-model&quot;&gt;Alternative: Models are bad. What if we don’t use a model?&lt;/h1&gt;

&lt;p&gt;I love using quantitative models like the one in this essay. I think more people should use them. But most models are bad, including mine. They depend on lots of assumptions, and you can change the model output by making small changes to the assumptions.&lt;/p&gt;

&lt;p&gt;How could we reason through this decision &lt;em&gt;without&lt;/em&gt; using an explicit model? Let’s review some arguments, both pro- and anti-startup.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Argument from risk preferences:&lt;/strong&gt; Most startup employees don’t want to donate all their equity. That makes them much more risk-averse than EAs who work at startups. If they’re acting rationally, we should expect them to demand higher equity to compensate for the risk. Therefore, startup equity should look particularly compelling to EAs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Argument from inefficiency:&lt;/strong&gt; The market for startups is illiquid and has high barriers to entry. We might reasonably expect it to be less efficient than public markets, which means we have a better chance of identifying startups that will outperform.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Argument from investability:&lt;/strong&gt; The most reputable VC firms usually don’t accept new investors. Even if they can beat the market, you can’t invest with them, so it doesn’t matter. But there’s nothing stopping you from getting jobs at top VCs’ portfolio companies.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Argument from overpopularity of startups:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;A lot of people want to work at startups because startups are cool, and they’re willing to accept below-market compensation.&lt;/li&gt;
  &lt;li&gt;Total VC investment dollars have &lt;a href=&quot;https://www.toptal.com/finance/venture-capital-consultants/state-of-venture-capital-industry-2019&quot;&gt;increased a lot&lt;/a&gt; over the past few years, even though the number of startups hasn’t changed much. So the startup market might be overinflated.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Argument from underappreciated risk:&lt;/strong&gt; In my experience, almost nobody understands how risky individual startups are. Even medium-sized companies are about &lt;a href=&quot;https://mdickens.me/2020/10/18/risk_of_concentrating/&quot;&gt;3x as risky&lt;/a&gt; as the S&amp;amp;P 500. I don’t have sufficiently granular data on startups, but startup-sized public companies are about 5–6x as volatile as the S&amp;amp;P 500, and my guess is startups are even worse. When I see people discussing the value of startup equity, they almost never properly account for this.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Argument from diversification:&lt;/strong&gt; If you get a job at a startup where no other EAs work, you’re adding an entirely new investment to the EA portfolio. That could be a good thing even if that particular investment has a worse expected value than the market. On the other hand, there are &lt;a href=&quot;https://mdickens.me/2020/11/23/uncorrelated_investing/&quot;&gt;other ways of diversifying&lt;/a&gt; that might be better.&lt;/p&gt;

&lt;p&gt;These qualitative arguments don’t obviously lean one way or the other. My intuition from my time working at startups and knowing lots of startup employees is that most people overvalue startups and underestimate risk, which means they probably push down the market rate for equity compensation. But even if most startup employees don’t behave consistently with their personal risk appetite, they still might behave more risk-aversely than EAs ought to.&lt;/p&gt;

&lt;h1 id=&quot;practical-details&quot;&gt;Practical details&lt;/h1&gt;

&lt;p&gt;If more EAs want to work at startups, there are some ways that people or organizations could support this effort, such as:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Maintain a list of startups with funding from top VCs, or startups that look particularly promising for whatever reason.&lt;/li&gt;
  &lt;li&gt;Coordinate to identify companies that don’t already have EAs working at them, or that might provide the most diversification benefits to the EA portfolio.&lt;/li&gt;
  &lt;li&gt;Help EAs review employment contracts from prospective employers.&lt;/li&gt;
  &lt;li&gt;Make loans or grants to EAs to help them exercise stock options as soon as they vest.&lt;/li&gt;
  &lt;li&gt;Career support/recruiting services for EAs who want to work at startups.&lt;/li&gt;
  &lt;li&gt;Support for EAs whose startups fail. Maybe even offer some kind of insurance to reduce risk, e.g., if you go work for a startup and it fails, we will pay you to compensate for the earnings you could have had.&lt;/li&gt;
  &lt;li&gt;Help people negotiate for better equity terms.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Some of these ideas are logistically difficult, maybe even impossible. I’m not sure of the best way to provide support for earners-to-give who choose to work at startups, but it’s something to consider. I believe it would be valuable if an organization existed that helped EAs with these practical details.&lt;/p&gt;

&lt;h1 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h1&gt;

&lt;p&gt;Assuming my model is approximately correct, what type of person might want to work at a startup?&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Someone who wants to earn to give.&lt;/li&gt;
  &lt;li&gt;Someone who doesn’t have the right skills or temperament to start a startup, but still might want to work at one.&lt;/li&gt;
  &lt;li&gt;Someone with special insight into a field who thinks they can identify the most promising companies.&lt;/li&gt;
  &lt;li&gt;Perhaps someone who can’t invest with leverage, or who doesn’t want to use leverage, but who is comfortable with the risk of startup equity.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Who might not want to work at a startup?&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Someone who’s not comfortable with the risk (equity risk or career risk or both).&lt;/li&gt;
  &lt;li&gt;Someone who believes they can see particularly good investment opportunities &lt;em&gt;outside&lt;/em&gt; of startups, and wants to earn a high salary so they can invest in those other opportunities.&lt;/li&gt;
  &lt;li&gt;Someone who thinks donating now is significantly better than donating a few years from now, and therefore doesn’t want to wait for startup equity to vest.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;My analysis suggests that working at a startup has good expected value &lt;strong&gt;under ideal conditions&lt;/strong&gt;. If you get a job offer from a startup, remember to pay attention to the &lt;a href=&quot;https://www.benkuhn.net/offer/&quot;&gt;specifics of the offer&lt;/a&gt;:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Is your total compensation competitive with what you’d get at a big company? (Taking startup equity at face value)&lt;/li&gt;
  &lt;li&gt;Does your equity contract include any sketchy terms?&lt;/li&gt;
  &lt;li&gt;etc. (Too many specifics to list all of them)&lt;/li&gt;
&lt;/ol&gt;

&lt;h1 id=&quot;areas-for-further-research&quot;&gt;Areas for further research&lt;/h1&gt;

&lt;p&gt;Many subjects warrant a deeper investigation:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Historical VC returns&lt;/li&gt;
  &lt;li&gt;Historical returns earned by startup employees&lt;/li&gt;
  &lt;li&gt;How employee equity terms affect the value of equity&lt;/li&gt;
  &lt;li&gt;Value of meta-options&lt;/li&gt;
  &lt;li&gt;How most startup employees decide where to work—most importantly, how sensitive are they to the value of equity?&lt;/li&gt;
  &lt;li&gt;Why EAs might or might not expect to beat the market&lt;/li&gt;
  &lt;li&gt;How Woodward (2009)&lt;sup id=&quot;fnref:13:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:13&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;14&lt;/a&gt;&lt;/sup&gt;’s analysis looks if you update it to include more recent data&lt;/li&gt;
  &lt;li&gt;Relevance of giving now vs. later&lt;/li&gt;
  &lt;li&gt;Career capital from working at startups. Does working at a startup train you to start a startup or a nonprofit?&lt;/li&gt;
  &lt;li&gt;Other considerations worth including&lt;/li&gt;
&lt;/ul&gt;

&lt;h1 id=&quot;acknowledgements&quot;&gt;Acknowledgements&lt;/h1&gt;

&lt;p&gt;Thanks to Linchuan Zhang for commissioning this research project and providing support. Thanks to Charles Dillon for feedback.&lt;/p&gt;

&lt;h1 id=&quot;changelog&quot;&gt;Changelog&lt;/h1&gt;

&lt;p&gt;2024-04-25: Upgraded confidence from “somewhat likely” to “likely” based on the fact that readers did not spot any major flaws in my reasoning, and based on calculating the value of meta-options using Black-Scholes (see &lt;a href=&quot;https://forum.effectivealtruism.org/posts/kGbHxYhfqttQZx2QD/should-earners-to-give-work-at-startups-instead-of-big?commentId=B2RpHnN5KRMSxwSzM&quot;&gt;here&lt;/a&gt; for explanation) which roughly agreed with Ben Kuhn’s script that I’d used previously.&lt;/p&gt;

&lt;h1 id=&quot;appendix-a-startups-for-founders-and-investors&quot;&gt;Appendix A: Startups for founders and investors&lt;/h1&gt;

&lt;p&gt;This essay has looked at startups from the perspective of employees. How do startups look for other types of people?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Founders:&lt;/strong&gt; Similar to employees in many ways. The upside is you get a lot more equity. (&lt;a href=&quot;https://web.stanford.edu/~rehall/Hall-Woodward%20on%20entrepreneurship.pdf&quot;&gt;Hall &amp;amp; Woodward (2012)&lt;/a&gt;&lt;sup id=&quot;fnref:21&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:21&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;21&lt;/a&gt;&lt;/sup&gt; found that VC-backed startup founders on average made much more money than salaried employees.) The downside is you have to actually start a startup, which is much harder and may require an entirely different skillset.&lt;/p&gt;

&lt;p&gt;Other people have written about whether EAs should start startups:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Applied Divinity Studies, &lt;a href=&quot;https://applieddivinitystudies.com/billionaire/&quot;&gt;Life Advice: Become a Billionaire&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;Mathieu Putz, &lt;a href=&quot;https://forum.effectivealtruism.org/posts/m35ZkrW8QFrKfAueT/an-update-in-favor-of-trying-to-make-tens-of-billions-of&quot;&gt;An update in favor of trying to make tens of billions of dollars&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;Carl Shulman, &lt;a href=&quot;https://80000hours.org/2012/01/salary-or-startup-how-do-gooders-can-gain-more-from-risky-careers/&quot;&gt;Salary or startup? How do-gooders can gain more from risky careers&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;Brian Tomasik, &lt;a href=&quot;https://reducing-suffering.org/calculator-expected-utility-founding-startup/&quot;&gt;Calculator for Expected Utility of Founding a Startup&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;VC limited partners:&lt;/strong&gt; If you give your money to a VC firm to invest, this is probably worse than being a startup employee (although it does have the advantage that you don’t need to get a new job). You have to pay VC fees and you don’t get meta-options. You do get a better liquidation preference, but that’s usually not worth as much. For a more detailed discussion on investing in VC, see &lt;a href=&quot;https://gigaom2.files.wordpress.com/2012/05/vc-enemy-is-us-report.pdf&quot;&gt;Mulcahy et al. (2012)&lt;/a&gt;&lt;sup id=&quot;fnref:20&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:20&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;22&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Angel investors:&lt;/strong&gt; If you become an angel investor, you don’t have to pay VC fund fees, but you do have to evaluate startups on your own.&lt;/p&gt;

&lt;h1 id=&quot;appendix-b-some-important-tangents&quot;&gt;Appendix B: Some important tangents&lt;/h1&gt;

&lt;p&gt;The points below are all important, but they distract from the thesis of this essay, so I’m not commenting on them in detail.&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Within the context of my model, some big companies’ compensation packages behave more like startups’. Companies such as Facebook and Google offer equity to employees. Unlike with startups, you can sell the equity as soon as you get it. But you usually have to wait a year, and a big company’s equity grant still behaves like a meta-option.&lt;/li&gt;
  &lt;li&gt;There are many non-monetary pros and cons to working at a startup. For instance, see &lt;a href=&quot;https://danluu.com/startup-tradeoffs/&quot;&gt;Big companies vs. startups&lt;/a&gt; and &lt;a href=&quot;https://forum.effectivealtruism.org/posts/ejaC35E5qyKEkAWn2/early-career-ea-s-should-consider-joining-fast-growing&quot;&gt;Early career EA’s should consider joining fast-growing startups in emerging technologies&lt;/a&gt;.&lt;/li&gt;
  &lt;li&gt;An unimportant point that I nonetheless want to address: Startup employees’ equity will get diluted by future fundraising rounds. This doesn’t matter because VCs will get diluted by the same amount, so it doesn’t make employee equity look worse relative to VC equity (unless the employment contract contains sketchy terms around who gets diluted, in which case maybe you shouldn’t work there). Normally, VCs have the option to invest more money in future rounds to negate their dilution, but this also doesn’t matter because it doesn’t change the return on their initial equity purchase.&lt;/li&gt;
&lt;/ol&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:3&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;The report includes VC returns up to 2020, but it only includes detailed data up to 2018. So when I did my analysis, I used the 1995–2018 data. &lt;a href=&quot;#fnref:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:12&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Somewhat concerningly, these two data sets show different numbers even for years where they overlap. For example, the 1988–2016 data set quotes a 60.09% return for the year 1996, whereas the 1995–2018 data set claims a 63.46% return for the same year. This discrepancy is at least partially because the 1995–2018 series includes more VC firms, but I haven’t read the Cambridge Associates reports in enough detail to say if that’s the only reason. &lt;a href=&quot;#fnref:12&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Koch (2014). &lt;a href=&quot;http://www.vernimmen.com/ftp/KOCH_S_Research_paper.pdf&quot;&gt;The risk and return of venture capital.&lt;/a&gt; &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:2&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Woodward (2009). &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1458050&quot;&gt;Measuring risk for venture capital and private equity portfolios.&lt;/a&gt; &lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:5&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Harris, Jenkinson, Kaplan &amp;amp; Stucke (2020). &lt;a href=&quot;https://bfi.uchicago.edu/wp-content/uploads/2020/11/BFI_WP_2020167.pdf&quot;&gt;Has Persistence Persisted in Private Equity? Evidence from Buyout and Venture Capital Funds&lt;/a&gt; &lt;a href=&quot;#fnref:5&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:5:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:9&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Figures are copied or inferred from Harris et al. (2020), Table 1, Table 2, and Table 4. &lt;a href=&quot;#fnref:9&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:7&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Kaplan &amp;amp; Schoar (2005). &lt;a href=&quot;http://web.mit.edu/aschoar/www/KaplanSchoar2005.pdf&quot;&gt;Private Equity and Performance: Returns, Persistence, and Capital Flows.&lt;/a&gt; &lt;a href=&quot;#fnref:7&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:24&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;At least in the United States, if you donate stock to charity, you can only deduct up to 30% of your income. If you get lucky and make a bunch of money when your startup exits, your startup equity could account for something like 90% of your income. You can only deduct 30%, so you’re stuck paying taxes on the other 60%. &lt;a href=&quot;#fnref:24&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:22&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Very roughly, a startup has a 70% chance to be worth 0x, 20% chance of 0–1x, and 10% chance of &amp;gt;1x. Liquidation preference only matters in the 0–1x case, where common shares are worth about half as much as their face value.&lt;/p&gt;

      &lt;p&gt;If you don’t filter out sketchy terms, the appropriate discount is &lt;a href=&quot;https://www.benkuhn.net/terms/&quot;&gt;more like 36%&lt;/a&gt;. &lt;a href=&quot;#fnref:22&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:23&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Actually, I know a few people who I believe could do a good job of identifying top startups if they took the time to conduct lots of interviews and due diligence. But they’re not going to do that because they’re busy doing other important EA-related activities. &lt;a href=&quot;#fnref:23&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:8&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Chingono &amp;amp; Rasmussen (2015). &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2639647&quot;&gt;Leveraged Small Value Equities.&lt;/a&gt; &lt;a href=&quot;#fnref:8&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:10&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;For the optimistic estimate, I assumed: top-quartile VCs’ average out-of-sample return of 26% (or 23% real) fully persists; EAs perform 5% better than top VCs; and meta-options are worth 20% (which follows from optimistic assumptions about how meta-options behave).&lt;/p&gt;

      &lt;p&gt;For the pessimistic estimate, I assumed: public equity valuations fully mean revert, and startups perform even worse due to higher risk; top quartile VCs’ returns do not persist at all; and meta-options are worthless (probably because there’s some flaw with them that I haven’t thought of). &lt;a href=&quot;#fnref:10&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:25&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Dreyer &amp;amp; Hubrich (2017). &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3074529&quot;&gt;Tail Risk Mitigation with Managed Volatility Strategies.&lt;/a&gt; &lt;a href=&quot;#fnref:25&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:13&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Woodward (2009). &lt;a href=&quot;http://sandhillecon.com/pdf/MeasuringRiskForVentureAndBuyouts.pdf&quot;&gt;Measuring Risk for Venture Capital and Private Equity Portfolios.&lt;/a&gt; &lt;a href=&quot;#fnref:13&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:13:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt; &lt;a href=&quot;#fnref:13:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;3&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:15&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Technically, if you get employee stock options, your equity is leveraged. But the amount of leverage approaches zero as the stock price increases. And most stock options are only a little bit leveraged to begin with. For example, if your company stock was last valued at $4 and you get options with a $1 strike price, that’s only 1.33:1 leverage. If the stock price doubles to $8, now you only have 1.14:1 leverage. &lt;a href=&quot;#fnref:15&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
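      &lt;p&gt;A minimal sketch of that arithmetic (this measures intrinsic-value leverage only, ignoring the option’s time value):&lt;/p&gt;

```python
# Effective leverage of an in-the-money employee stock option,
# modeled as (stock price) / (intrinsic value) = S / (S - K).
# Numbers match the example above: $4 stock, $1 strike.

def option_leverage(price: float, strike: float) -> float:
    """Ratio of stock exposure to the option's intrinsic value."""
    intrinsic = price - strike
    if intrinsic <= 0:
        raise ValueError("option is not in the money")
    return price / intrinsic

print(round(option_leverage(4, 1), 2))  # 1.33
print(round(option_leverage(8, 1), 2))  # 1.14
```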
    &lt;/li&gt;
    &lt;li id=&quot;fn:17&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;In keeping with my &lt;a href=&quot;https://mdickens.me/2020/01/06/how_much_leverage_should_altruists_use/#cost-of-leverage&quot;&gt;previous work&lt;/a&gt; on leverage, I assume that borrowers must pay 1% plus the risk-free rate. &lt;a href=&quot;#fnref:17&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:14&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;This is the average of the projections by AQR and Research Affiliates as of October 2021. &lt;a href=&quot;#fnref:14&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:16&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;The utility function takes the geometric mean return as the utility and multiplies by 100,000 to make the numbers more readable. As a baseline, it calculates the expected utility of the EA portfolio without your investment, and then subtracts that from the total expected utility of the EA portfolio including your investment. &lt;a href=&quot;#fnref:16&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:11&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Still not impossible, because you could pick the data set or the time series that most closely matches the outcome you want. &lt;a href=&quot;#fnref:11&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:19&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;For public equities, I used the historical return of the total US stock market, assuming zero fees or trading costs. To find the historical return of Val/Mom/Trend, I created a hypothetical portfolio that invested 80% in Alpha Architect’s Value Momentum Trend Index and 20% in AQR’s Managed Futures Index, which roughly reflects how I actually invest my money. Both indexes subtract estimated fees and trading costs. The historical returns are hypothetical, not actual. I didn’t have data for 2018, so I calculated summary statistics over 1995–2017 instead. &lt;a href=&quot;#fnref:19&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:21&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Hall &amp;amp; Woodward (2012). &lt;a href=&quot;https://web.stanford.edu/~rehall/Hall-Woodward%20on%20entrepreneurship.pdf&quot;&gt;The Burden of the Nondiversifiable Risk of Entrepreneurship.&lt;/a&gt; &lt;a href=&quot;#fnref:21&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:20&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Mulcahy, Weeks &amp;amp; Bradley (2012). &lt;a href=&quot;https://gigaom2.files.wordpress.com/2012/05/vc-enemy-is-us-report.pdf&quot;&gt;We Have Met the Enemy…and He Is Us: Lessons from Twenty Years of the Kauffman Foundation’s Investments in Venture Capital Funds and the Triumph of Hope over Experience.&lt;/a&gt; &lt;a href=&quot;#fnref:20&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>Obvious Investing Facts</title>
				<pubDate>Fri, 29 Oct 2021 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2021/10/29/obvious_investing_facts/</link>
				<guid isPermaLink="true">http://mdickens.me/2021/10/29/obvious_investing_facts/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;&lt;em&gt;Last updated 2025-05-01.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Many investors, even professionals, are ignorant about obvious facts that they really should know. When I say a fact is “obvious”, what I mean is that you can easily observe it by looking at widely-available data using simple statistical tools.&lt;/p&gt;

&lt;p&gt;A list of obvious but underappreciated facts:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fact 1. A single large-cap stock is about 2x as volatile as the total stock market.&lt;/strong&gt; A small-cap stock is about 4x as volatile. It’s common knowledge that individual stocks are risky, but most people don’t know how to quantify the risk, and I believe they tend to underestimate it. I wrote &lt;a href=&quot;https://mdickens.me/2020/10/18/risk_of_concentrating/&quot;&gt;a whole essay&lt;/a&gt; about this because I think it’s the most important underappreciated investing fact.&lt;/p&gt;

&lt;!-- more --&gt;

&lt;p&gt;&lt;strong&gt;Fact 2. A basket of 50 randomly-chosen stocks isn’t much more volatile than the total stock market.&lt;/strong&gt; This is kind of the opposite of fact 1. According to my backtest,&lt;sup id=&quot;fnref:7&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:7&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt; the total US stock market had an annual standard deviation of 17.3%, whereas a randomly-chosen selection of 50 stocks had a standard deviation of 20.2%.&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt; That’s higher, but not by much.&lt;/p&gt;

&lt;p&gt;Baskets of 30 stocks had standard deviations of 21.7%; 10 stocks had 25.5%; 5 stocks had 31.9%.&lt;/p&gt;

&lt;p&gt;(For more on this, see Elton &amp;amp; Gruber (1977), &lt;a href=&quot;http://pages.stern.nyu.edu/~eelton/papers/77-oct.pdf&quot;&gt;Risk Reduction and Portfolio Size: An Analytic Solution.&lt;/a&gt;)&lt;/p&gt;
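&lt;p&gt;The pattern can be sketched with Elton &amp;amp; Gruber’s analytic formula: an equal-weighted basket of n stocks has variance (average single-stock variance)/n plus (1 − 1/n) times the average pairwise covariance. The inputs below are my own illustrative assumptions (average single-stock volatility ~50%, average pairwise correlation ~0.12), so the outputs won’t exactly reproduce the backtest numbers above, but the shape is the same:&lt;/p&gt;

```python
# Analytic volatility of an equal-weighted basket of n stocks
# (Elton & Gruber 1977): var_p = avg_var / n + (1 - 1/n) * avg_cov.
# Inputs are illustrative assumptions, not the post's backtest data.

def basket_volatility(n: int, stock_vol: float = 0.50, corr: float = 0.12) -> float:
    avg_var = stock_vol ** 2           # variance of a typical single stock
    avg_cov = corr * avg_var           # average covariance between two stocks
    var_p = avg_var / n + (1 - 1 / n) * avg_cov
    return var_p ** 0.5

# Volatility falls quickly with n and approaches sqrt(avg_cov),
# about 17.3% with these inputs -- the undiversifiable floor.
for n in (1, 5, 10, 30, 50):
    print(n, f"{basket_volatility(n):.1%}")
```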

&lt;p&gt;My anecdote about this: When I was considering opening a donor-advised fund (DAF), I had phone calls with a few investment advisors who I thought might do a good job of managing it. I remember one advisor in particular who I liked at first because she said she was a fan of value and momentum investing. I asked her if she could invest my DAF in a concentrated portfolio of about 60 value stocks, like I describe in &lt;a href=&quot;https://mdickens.me/2021/02/08/concentrated_stock_selection/&quot;&gt;this essay&lt;/a&gt;. She declined, saying that a 60-stock portfolio would be far riskier than the broad stock market, and without any extra expected return to compensate. This is obviously false if you simply look at the data. A randomly-chosen collection of 60 stocks is a little riskier than the broad market, but not much riskier. And it’s true that there’s no reason to take on risk if there’s no benefit, but if you’re specifically investing in value and momentum strategies, then historically, concentration risk also brought higher return, as I &lt;a href=&quot;https://mdickens.me/2021/02/08/concentrated_stock_selection/&quot;&gt;showed previously&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fact 3. Stocks can underperform bonds for 30+ years at a time.&lt;/strong&gt; Investment manager Meb Faber &lt;a href=&quot;https://mebfaber.com/2020/02/18/expectations-investing/&quot;&gt;wrote an article about this&lt;/a&gt;, inspired by two amusing Twitter polls:&lt;/p&gt;

&lt;blockquote class=&quot;twitter-tweet&quot;&gt;&lt;p lang=&quot;en&quot; dir=&quot;ltr&quot;&gt;What stretch of underperformance by stocks vs. bonds would you be willing to tolerate before selling your stock allocation?&lt;/p&gt;&amp;mdash; Meb Faber (@MebFaber) &lt;a href=&quot;https://twitter.com/MebFaber/status/1225480365726261249?ref_src=twsrc%5Etfw&quot;&gt;February 6, 2020&lt;/a&gt;&lt;/blockquote&gt;

&lt;script async=&quot;&quot; src=&quot;https://platform.twitter.com/widgets.js&quot; charset=&quot;utf-8&quot;&gt;&lt;/script&gt;

&lt;blockquote class=&quot;twitter-tweet&quot;&gt;&lt;p lang=&quot;en&quot; dir=&quot;ltr&quot;&gt;Would you be willing to invest in an asset that historically outperforms bonds by a few percentage points per year, but has, multiple times, generated zero outperformance for stretches lasting over 30 years?&lt;/p&gt;&amp;mdash; Meb Faber (@MebFaber) &lt;a href=&quot;https://twitter.com/MebFaber/status/1225557434728620033?ref_src=twsrc%5Etfw&quot;&gt;February 6, 2020&lt;/a&gt;&lt;/blockquote&gt;

&lt;p&gt;53% of investors said they would not tolerate 10 years of stocks underperforming bonds. What did they do from 2000 to 2010, when US stocks performed worse than US bonds? (Bonds have beaten stocks many times, but 2000 to 2010 is such a recent decade that you’d think people would remember it.) Fully 76% of respondents said they wouldn’t tolerate 30 years of underperformance. So either they don’t invest in stocks at all (unlikely), or they don’t know that stocks have in fact underperformed bonds for 30 years on more than one occasion.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://www.longtermtrends.net/stocks-vs-bonds/&quot;&gt;Longtermtrends.net&lt;/a&gt; has a nice chart of US stock vs. bond performance going back 150 years, using data from &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3805927&quot;&gt;McQuarrie (2021)&lt;/a&gt;,&lt;sup id=&quot;fnref:11&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:11&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt; &lt;a href=&quot;https://fred.stlouisfed.org/series/BAMLCC0A0CMTRIV&quot;&gt;FRED&lt;/a&gt;,&lt;sup id=&quot;fnref:12&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:12&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt; and &lt;a href=&quot;https://data.nasdaq.com/data/MULTPL/SP500_REAL_PRICE_MONTH-sp-500-real-price-by-month&quot;&gt;NASDAQ&lt;/a&gt;.&lt;sup id=&quot;fnref:13&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:13&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;5&lt;/a&gt;&lt;/sup&gt; (Be sure to look at the “Total Return Stock Index” chart. This chart only shows a maximum of 22 years of underperformance since 1900, but it’s using a corporate bond index and I assume Meb was talking about Treasury bonds.)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fact 4. Bonds are pretty risky after adjusting for inflation.&lt;/strong&gt; US short-term T-bills—supposedly the safest investment around—have experienced a historical drawdown of –49% after inflation. On another fun Twitter poll by Meb Faber, &lt;a href=&quot;https://mebfaber.com/2020/03/05/the-stay-rich-portfolio/&quot;&gt;only 36% of respondents&lt;/a&gt; guessed that the drawdown was worse than –45%. (Meb Faber’s Twitter is a great source of underappreciated investing facts.)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fact 5. The last 3–5 years of performance (of a fund or asset class) tell you almost nothing about future performance.&lt;/strong&gt; For instance, see Hoffstein (2016), &lt;a href=&quot;https://web.archive.org/web/20220707161257/https://blog.thinknewfound.com/2016/04/3-year-track-records-meaningful/&quot;&gt;Are 3-year track records meaningful?&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And yet, &lt;a href=&quot;https://blog.thinknewfound.com/2016/08/finding-7-5-returns/&quot;&gt;89% of managers would replace an investment strategy that has underperformed for 3 years&lt;/a&gt;. And when investment managers switch strategies after a period of poor performance, &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1523736&quot;&gt;the new strategy tends to perform worse than the old one would have&lt;/a&gt;&lt;sup id=&quot;fnref:10&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:10&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;6&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fact 6. Buybacks are financially equivalent to dividends.&lt;/strong&gt; Many investors like dividends but dislike buybacks. But these are just two different ways of returning capital to shareholders. In an efficient market, if a company buys back 2% of its shares and then you sell 2% of your holdings, that’s exactly the same as if the company paid a 2% dividend. Conversely, if the company pays you a 2% dividend and you use that money to buy more shares, that’s exactly the same as if the company did a 2% buyback. These scenarios are slightly different for tax purposes, but in financial terms, they’re identical.&lt;/p&gt;
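&lt;p&gt;The equivalence is easy to verify with a toy example. Here, a firm worth 100 with 100 shares returns 2% of its value either way; the investor (holding 10 shares) ends up with identical total wealth. The specific numbers are illustrative:&lt;/p&gt;

```python
# Toy comparison of a 2% buyback (investor sells pro rata) versus a 2%
# dividend, ignoring taxes. Firm value 100, 100 shares at price 1,
# investor owns 10 shares. Returns the investor's cash + stock value.

def buyback_then_sell(firm=100.0, shares=100.0, held=10.0, pct=0.02):
    cash_spent = firm * pct            # firm spends 2 repurchasing shares
    firm -= cash_spent
    shares -= cash_spent / 1.0         # price is 1 at the time of the buyback
    sold = held * pct                  # investor sells 2% of their holdings
    held -= sold
    price = firm / shares              # price is unchanged: 98 / 98 = 1
    return sold * price + held * price

def dividend_no_reinvest(firm=100.0, shares=100.0, held=10.0, pct=0.02):
    per_share = firm * pct / shares    # 2% of firm value, paid out per share
    cash = held * per_share
    firm -= firm * pct                 # firm value drops by the payout
    price = firm / shares              # price falls to 0.98
    return cash + held * price

print(buyback_then_sell(), dividend_no_reinvest())  # 10.0 10.0
```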

&lt;h2 id=&quot;sort-of-obvious-but-controversial-facts&quot;&gt;Sort-of-obvious but controversial facts&lt;/h2&gt;

&lt;p&gt;The facts in this section are less obviously true, either because they come from theoretical models that don’t necessarily hold in practice, or because they rely on mixed empirical data. But I believe they’re still underappreciated, and it’s somewhat surprising just how underappreciated they are among investment professionals.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fact 7. Actively-managed mutual funds are just as risky as index funds.&lt;/strong&gt; I don’t know if this is a common misconception among ordinary investors, but I frequently hear investment professionals claim that actively-managed funds are less risky than index funds, or that they perform better in market drawdowns. The evidence on this is mixed, but generally doesn’t look promising for the actively-managed funds (e.g., see &lt;a href=&quot;https://www.vanguardcanada.ca/documents/7-myths.pdf&quot;&gt;Vanguard (2018)&lt;/a&gt;, specifically Myth #5).&lt;sup id=&quot;fnref:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;7&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;I could be wrong about this one, because the evidence isn’t great either way. But the mutual fund managers who confidently claim that they can reduce risk are definitely not standing on solid ground.&lt;/p&gt;

&lt;p&gt;“It is difficult to get a man to understand something, when his salary depends on his not understanding it.” Most people know by now that active mutual funds don’t outperform index funds. So in an attempt to justify their existence, investment managers claim that they provide value by reducing risk. This is (probably) false, but it’s not as well-known that it’s false, so they think they can get away with it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fact 8. If the efficient market hypothesis and &lt;a href=&quot;https://www.investopedia.com/terms/c/capm.asp&quot;&gt;CAPM&lt;/a&gt; are strictly true, then the mean-variance-efficient portfolio is the global market portfolio.&lt;/strong&gt;&lt;sup id=&quot;fnref:5&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:5&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;8&lt;/a&gt;&lt;/sup&gt; Not the S&amp;amp;P 500. Not 60/40. Not global equities.&lt;/p&gt;

&lt;p&gt;Maybe this fact isn’t exactly obvious—figuring it out required inventing new math. But it was proven over half a century ago, and the man who proved it went on to win a Nobel Prize in economics, in large part &lt;a href=&quot;https://www.nobelprize.org/prizes/economic-sciences/1990/press-release/&quot;&gt;thanks to precisely this result&lt;/a&gt;. You’d think it would be more well-known by now.&lt;/p&gt;

&lt;p&gt;A weaker version of this fact: if you want international exposure, buy a world stock market fund. Don’t buy a US market fund and then claim you’re getting international exposure because some US companies sell internationally. I’ve seen many investing professionals recommend the latter (including &lt;a href=&quot;https://en.wikipedia.org/wiki/John_C._Bogle&quot;&gt;Jack Bogle&lt;/a&gt;). History shows that an international stock index is not perfectly correlated to a US index, so you can trivially get better diversification by buying internationally.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fact 9. In theory, the best way to increase expected return is by using leverage, not by increasing the ratio of stocks to bonds.&lt;/strong&gt; This isn’t an empirical fact, but it easily follows from &lt;a href=&quot;https://www.investopedia.com/terms/c/capm.asp&quot;&gt;CAPM&lt;/a&gt;, the standard theory of investment risk. CAPM makes some assumptions that don’t hold in practice. But under certain real-life conditions, it does indeed make sense to use leverage rather than overweight risky assets. A few sophisticated investors, such as &lt;a href=&quot;https://en.wikipedia.org/wiki/Bridgewater_Associates&quot;&gt;Bridgewater Associates&lt;/a&gt;, abide by this principle, but the vast majority of investors don’t.&lt;/p&gt;

&lt;p&gt;(Bridgewater pioneered &lt;a href=&quot;https://www.investopedia.com/terms/r/risk-parity.asp&quot;&gt;risk parity&lt;/a&gt; investing, where you allocate to each asset class such that each asset class exposes you to equal risk, and then add leverage. Risk parity is theoretically optimal when every asset class has the same &lt;a href=&quot;https://www.investopedia.com/terms/s/sharperatio.asp&quot;&gt;risk-adjusted return&lt;/a&gt;.)&lt;/p&gt;
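&lt;p&gt;A naive sketch of the idea: weight each asset class inversely to its volatility so each contributes equal risk, then lever the whole portfolio up to a target volatility. The volatilities below are illustrative, and I ignore correlations for simplicity (real risk parity uses the full covariance matrix):&lt;/p&gt;

```python
# Naive risk parity: inverse-volatility weights, then leverage to a
# volatility target. Asset vols are illustrative assumptions, and
# assets are treated as uncorrelated for simplicity.

def risk_parity_weights(vols):
    inv = [1 / v for v in vols]
    total = sum(inv)
    return [x / total for x in inv]    # each weight * vol is equal

def levered_allocation(vols, target_vol):
    w = risk_parity_weights(vols)
    # Portfolio vol assuming zero correlation between assets:
    port_vol = sum((wi * vi) ** 2 for wi, vi in zip(w, vols)) ** 0.5
    leverage = target_vol / port_vol
    return [leverage * wi for wi in w], leverage

vols = [0.16, 0.05]                    # e.g. stocks ~16%, bonds ~5%
alloc, lev = levered_allocation(vols, target_vol=0.10)
print([round(a, 2) for a in alloc], round(lev, 2))
```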

&lt;p&gt;&lt;strong&gt;Fact 10. Gold is not a great inflation hedge.&lt;/strong&gt;&lt;sup id=&quot;fnref:6&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:6&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;9&lt;/a&gt;&lt;/sup&gt; Many people claim that gold can hedge inflation, and there are reasonable theoretical reasons to expect it to. But gold is very volatile, and it frequently performs poorly even in times of high inflation. It might still make a good addition to a portfolio, but it doesn’t provide the straightforward inflation protection that a lot of people believe it does.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fact 11. Art has not outperformed the stock market.&lt;/strong&gt; Art sellers like to claim that art is a great investment (e.g., &lt;a href=&quot;https://robbreport.com/shelter/art-collectibles/investing-art-stock-2874901/&quot;&gt;1&lt;/a&gt;, &lt;a href=&quot;https://www.forbes.com/sites/bardenprisant/2020/03/27/the-art-market-is-beating-the-stock-market/&quot;&gt;2&lt;/a&gt;), and they cite studies showing the returns to classic art. These results are &lt;a href=&quot;https://www.gsb.stanford.edu/insights/research-art-good-investment&quot;&gt;due to selection bias&lt;/a&gt;&lt;sup id=&quot;fnref:4&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:4&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;10&lt;/a&gt;&lt;/sup&gt;&lt;sup id=&quot;fnref:9&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:9&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;11&lt;/a&gt;&lt;/sup&gt;. You can only measure an artwork’s change in value if it gets sold once and then sold again later. And art only tends to get re-sold when its value goes up, so your sample excludes most of the artwork that depreciated over time.&lt;/p&gt;

&lt;h2 id=&quot;obvious-facts-that-are-becoming-more-well-known&quot;&gt;Obvious facts that are becoming more well-known&lt;/h2&gt;

&lt;p&gt;20 or 30 years ago, most people didn’t know these facts. But fortunately, they’ve become more widely understood in recent years.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fact 12. High fees are really bad.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fact 13. Actively-managed funds generally can’t outperform index funds.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Many people know these facts, and the market for low-fee index funds is growing rapidly. Even so, about 2/3 of investment dollars still reside in expensive actively-managed mutual funds, so we have a long way to go.&lt;/p&gt;

&lt;h2 id=&quot;some-more-facts&quot;&gt;Some more facts&lt;/h2&gt;

&lt;p&gt;AQR has written three papers about investing facts and fictions. Most of AQR’s facts are obvious in the sense that you can easily verify them using empirical data. They’re somewhat less obvious than the facts above in that they’re about more specific types of investments, rather than the general behavior of markets.&lt;/p&gt;

&lt;p&gt;Their three papers (PDF warning):&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;&lt;a href=&quot;https://images.aqr.com/-/media/AQR/Documents/Journal-Articles/JPM-Fact-Fiction-and-Momentum-Investing.pdf&quot;&gt;Fact, Fiction and Momentum Investing&lt;/a&gt; (2014)&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://images.aqr.com/-/media/AQR/Documents/Journal-Articles/JPM-Fact-Fiction-and-Value-Investing.pdf&quot;&gt;Fact, Fiction and Value Investing&lt;/a&gt; (2015)&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://www.aqr.com/-/media/AQR/Documents/Whitepapers/Fact-Fiction-and-the-Size-Effect.pdf&quot;&gt;Fact, Fiction and the Size Effect&lt;/a&gt; (2018)&lt;/li&gt;
&lt;/ol&gt;

&lt;h2 id=&quot;why-do-so-many-investment-professionals-not-know-these-obvious-facts&quot;&gt;Why do so many investment professionals not know these obvious facts?&lt;/h2&gt;

&lt;p&gt;I’m not sure. I can see two common themes:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Investment professionals believe things because they can make more money by believing them, not because they’re true.&lt;/li&gt;
  &lt;li&gt;Investment professionals don’t look at empirical data, so regardless of how obvious an empirical fact might be, they will never notice it.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;One possible explanation is that when people think they know something, they don’t feel the need to look into it. For example, if you “know” that bonds are safe after adjusting for inflation, why would you look up their historical performance? Even so, that would mean financial professionals don’t make a habit of looking at simple data that’s relevant to their work, which seems bad. Or, like, you’d think this information would propagate. Somebody tweets about an obvious but little-known fact; their followers realize how wrong they were, and tell their colleagues about it; those colleagues tell their other colleagues; and pretty soon, everybody knows. (I must admit that I didn’t know bonds had experienced a –49% drawdown until I read Meb Faber’s tweet, so I’m also guilty of not looking at obvious data sometimes.)&lt;/p&gt;

&lt;p&gt;(I do think professional credentials provide a positive signal, at least. In my experience, &lt;a href=&quot;https://en.wikipedia.org/wiki/Chartered_Financial_Analyst&quot;&gt;CFAs&lt;/a&gt; are more likely to know these obvious facts.)&lt;/p&gt;

&lt;p&gt;Maybe experts in most fields don’t know obvious facts. I don’t know much about most fields, so I can’t say. The two areas that I know a lot about are computer science and investing (and I know more about computer science because I studied it in college). In my experience, there aren’t a lot of obvious computer science misconceptions among programmers/computer scientists (unless I also believe all the common misconceptions).&lt;/p&gt;

&lt;p&gt;Maybe investment advisors aren’t the right reference class for “experts”. Maybe the experts are the people publishing academic papers on investing.&lt;sup id=&quot;fnref:8&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:8&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;12&lt;/a&gt;&lt;/sup&gt; I’d guess that academics in finance know obvious facts more often than investment advisors do.&lt;/p&gt;

&lt;p&gt;Sometimes, people don’t know important facts, but they behave correctly anyway. I frequently hear people touting advice such as, “pick an investment allocation and stick with it no matter what.” You might come up with this advice if you knew that stocks can underperform bonds for 30 years at a time, and that 3–5 years of historical performance tells you almost nothing about future performance. And many people follow this advice even without knowing those facts. If they end up doing the right thing, it doesn’t really matter if they know why they’re doing it. On the other hand, almost everyone behaves incorrectly in other ways—&lt;a href=&quot;https://mdickens.me/2017/03/26/do_investors_put_too_much_stock_in_the_us/&quot;&gt;overweighting their home country stock&lt;/a&gt;, or &lt;a href=&quot;https://mdickens.me/2020/10/18/risk_of_concentrating/&quot;&gt;holding on to concentrated investments for too long&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;(This reminds me of &lt;a href=&quot;https://slatestarcodex.com/2013/12/17/statistical-literacy-among-doctors-now-lower-than-chance/&quot;&gt;Statistical Literacy Among Doctors Now Lower Than Chance&lt;/a&gt;: doctors perform depressingly poorly on the Bayes mammogram problem, but when they get a positive test result in real life, they still do the correct thing—rather than jumping to conclusions, they order more tests.)&lt;/p&gt;
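&lt;p&gt;(For reference, the arithmetic behind that problem, using the numbers from the standard textbook version: 1% prevalence, 80% sensitivity, 9.6% false-positive rate. The answer is far lower than most people guess:)&lt;/p&gt;

```python
# Bayes' rule for the mammogram problem: probability of cancer given a
# positive test. Numbers are from the standard version of the problem.
prior, sensitivity, false_pos = 0.01, 0.80, 0.096

p_positive = sensitivity * prior + false_pos * (1 - prior)
p_cancer_given_positive = sensitivity * prior / p_positive
print(f"{p_cancer_given_positive:.1%}")  # 7.8%
```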

&lt;h1 id=&quot;changelog&quot;&gt;Changelog&lt;/h1&gt;

&lt;ul&gt;
  &lt;li&gt;2022-11-18: Add a data source for the fact “Stocks can underperform bonds for 30+ years at a time.”&lt;/li&gt;
  &lt;li&gt;2025-05-01: I realized the facts were numbered incorrectly so I updated the numbers.&lt;/li&gt;
&lt;/ul&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:7&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;My analysis took about two hours, which is long enough that someone wouldn’t do it on a whim, but short enough that I still think it’s fair to describe the result as “obvious”. &lt;a href=&quot;#fnref:7&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;My methodology:&lt;/p&gt;

      &lt;ol&gt;
        &lt;li&gt;Each year from 1973 to 2013, choose 50 stocks uniformly at random.&lt;/li&gt;
        &lt;li&gt;Simulate a portfolio that buys the chosen stocks, weighted by market cap.&lt;/li&gt;
        &lt;li&gt;Calculate the standard deviation of the portfolio’s annual total returns.&lt;/li&gt;
        &lt;li&gt;Repeat 100 times and take the arithmetic mean of all the standard deviations.&lt;/li&gt;
      &lt;/ol&gt;
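The four steps above can be sketched in Python. This is a minimal sketch, not the original analysis: it assumes hypothetical `annual_returns` and `market_caps` mappings as inputs, and it holds the portfolio at fixed cap weights (rebalanced each year) rather than modeling buy-and-hold drift.

```python
import random
import statistics

def mean_portfolio_stdev(annual_returns, market_caps,
                         n_stocks=50, n_trials=100, seed=0):
    """Repeatedly sample n_stocks tickers uniformly at random, form a
    cap-weighted portfolio, take the standard deviation of its annual
    total returns, and average that across trials."""
    rng = random.Random(seed)
    tickers = list(annual_returns)
    n_years = len(next(iter(annual_returns.values())))
    stdevs = []
    for _ in range(n_trials):
        chosen = rng.sample(tickers, min(n_stocks, len(tickers)))
        total_cap = sum(market_caps[t] for t in chosen)
        weights = {t: market_caps[t] / total_cap for t in chosen}
        # Portfolio return in each year under the cap weights
        port = [sum(weights[t] * annual_returns[t][y] for t in chosen)
                for y in range(n_years)]
        stdevs.append(statistics.stdev(port))
    return statistics.mean(stdevs)
```

With real CRSP-style return data in place of the synthetic inputs, the mean of the per-trial standard deviations is the quantity the footnote reports a 0.17-percentage-point standard error for.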

      &lt;p&gt;The standard error was 0.17 percentage points. &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:11&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;McQuarrie, Edward (2021). &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3805927&quot;&gt;Stocks for the Long Run? Sometimes Yes. Sometimes No.&lt;/a&gt; &lt;a href=&quot;#fnref:11&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:12&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Federal Reserve Bank of St. Louis (2022). &lt;a href=&quot;https://fred.stlouisfed.org/series/BAMLCC0A0CMTRIV&quot;&gt;ICE BofA US Corporate Index Total Return Index Value.&lt;/a&gt; &lt;a href=&quot;#fnref:12&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:13&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;NASDAQ (2022). &lt;a href=&quot;https://data.nasdaq.com/data/MULTPL/SP500_REAL_PRICE_MONTH-sp-500-real-price-by-month&quot;&gt;S&amp;amp;P Real Price by Month.&lt;/a&gt; &lt;a href=&quot;#fnref:13&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:10&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Stewart, Neumann, Knittel &amp;amp; Heisler (2009). &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1523736&quot;&gt;Absence of Value: An Analysis of Investment Allocation Decisions by Institutional Plan Sponsors.&lt;/a&gt; &lt;a href=&quot;#fnref:10&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:3&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Ptak (2018), &lt;a href=&quot;https://www.morningstar.com/articles/852864/will-active-stock-funds-save-your-bacon-in-a-downturn&quot;&gt;Will Active Stock Funds Save Your Bacon in a Downturn?&lt;/a&gt; found that actively-managed funds did tend to outperform in drawdowns, but not by enough (in the author’s opinion) to make up for their overall bad performance.&lt;/p&gt;

      &lt;p&gt;What I’d really like to see is how much &lt;a href=&quot;https://www.investopedia.com/terms/a/alpha.asp&quot;&gt;alpha&lt;/a&gt; mutual funds have in up or down markets, but I haven’t found any research on that.&lt;/p&gt;

      &lt;p&gt;See also &lt;a href=&quot;https://www.onedayinjuly.com/active-vs-passive-in-down-markets&quot;&gt;financial advisor Josh Kruk’s take&lt;/a&gt; on the matter. He did his own analysis and found that active funds did not outperform in down markets. &lt;a href=&quot;#fnref:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:5&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Sharpe (1964). &lt;a href=&quot;https://onlinelibrary.wiley.com/doi/full/10.1111/j.1540-6261.1964.tb02865.x&quot;&gt;Capital Asset Prices: A Theory of Market Equilibrium Under Conditions of Risk.&lt;/a&gt; &lt;a href=&quot;#fnref:5&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:6&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Erb, Harvey &amp;amp; Viskanta (2020). &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3667789&quot;&gt;Gold, the Golden Constant, COVID-19, ‘Massive Passives’ and Déjà Vu.&lt;/a&gt; &lt;a href=&quot;#fnref:6&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:4&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Korteweg, Kräussl &amp;amp; Verwijmeren (2013). &lt;a href=&quot;https://www.gsb.stanford.edu/faculty-research/working-papers/does-it-pay-invest-art-selection-corrected-returns-perspective&quot;&gt;Does It Pay to Invest in Art? A Selection-Corrected Returns Perspective.&lt;/a&gt; &lt;a href=&quot;#fnref:4&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:9&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Disclosure: The linked article did not convince me that classic art returns are due to selection bias. Rather, I was confident &lt;em&gt;a priori&lt;/em&gt; that the apparent high returns came from selection bias, and this was the first empirical result I found that supported my prior hypothesis. &lt;a href=&quot;#fnref:9&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:8&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Successful day traders and quants are experts in some sense, but their work doesn’t actually have much in common with the way most people invest. &lt;a href=&quot;#fnref:8&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>Future Funding/Talent/Capacity Constraints Matter, Too</title>
				<pubDate>Mon, 18 Oct 2021 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2021/10/18/future_talent_funding_constraints/</link>
				<guid isPermaLink="true">http://mdickens.me/2021/10/18/future_talent_funding_constraints/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;&lt;em&gt;Last updated 2021-10-20.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;People who talk about talent/funding/capacity constraints mostly talk about what’s the biggest constraint &lt;em&gt;right now&lt;/em&gt;. But it also matters what the constraints will be later.&lt;/p&gt;

&lt;p&gt;Right now, the EA community holds a lot of wealth—more wealth than it can productively spend in the next few years, at least on smaller cause areas such as AI safety, cause prioritization research, and wild animal welfare. Those newer fields need time to scale up so they can absorb more funding.&lt;/p&gt;

&lt;p&gt;That doesn’t mean EAs should stop earning to give. Maybe most EAs could do more good &lt;em&gt;this year&lt;/em&gt; with their direct efforts than with their donations. But perhaps 10 years from now, the smaller causes will have scaled up a lot, and they’ll be able to deploy much more money. Earners-to-give can invest their money for a while, and then deploy it once top causes develop enough spending capacity.&lt;/p&gt;

&lt;!-- more --&gt;

&lt;p&gt;Even if top cause areas don’t currently have the capacity to spend more money, earning-to-give still looks valuable if three conditions all hold:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;The capacity:funding ratio will greatly increase in the future.&lt;/li&gt;
  &lt;li&gt;The money spent in high-capacity organizations will still produce a lot of utility per dollar.&lt;/li&gt;
  &lt;li&gt;The discount rate isn’t too much higher than the investment rate of return.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In general, I believe it should be easier to grow capacity than to grow funding. For comparison, it’s not uncommon for startups to double the number of employees year over year. Charitable organizations usually can’t grow like startups, but they can at least get within the same ballpark.&lt;/p&gt;

&lt;p&gt;Over the past few years, &lt;a href=&quot;https://forum.effectivealtruism.org/posts/zA6AnNnYBwuokF8kB/is-effective-altruism-growing-an-update-on-the-stock-of&quot;&gt;funding has grown faster than capacity&lt;/a&gt;, not the other way around. So I could be wrong.&lt;/p&gt;

&lt;p&gt;Once capacity grows, will orgs still be able to do a lot of good per dollar? They will use up the best giving opportunities first, so by the time capacity expands, the remaining opportunities won’t be as good. How much difference does that make? It depends on how much it costs to solve the world’s most important problems. If it costs a lot, we will still need lots of money. If it doesn’t cost much (relatively speaking), then future dollars will be worth much less than dollars today.&lt;/p&gt;

&lt;p&gt;Is the discount rate high enough that we should care substantially less about future funding? I don’t think so. (What we actually care about is the discount rate minus the investment rate, because the investment rate determines how much money we will have in the future if we invest it now.) The altruistic discount rate minus the investment rate might be around 5% on the high end (it’s probably less than that, and it could be negative). I would be surprised if we couldn’t grow capacity &lt;em&gt;much&lt;/em&gt; more quickly than 5% per year. So this factor is unlikely to be decisive.&lt;/p&gt;
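To make the discount-rate-minus-investment-rate point concrete, here is a toy calculation with assumed numbers (a 7% investment return and a 12% altruistic discount rate, i.e. the 5%-net high end mentioned above; neither figure comes from any actual estimate):

```python
invest_return = 0.07   # assumed annual investment return
discount_rate = 0.12   # assumed altruistic discount rate (5% net of returns)
years = 10

# $1 invested now and donated after `years` years, discounted back to today
future_dollars = (1 + invest_return) ** years
present_value = future_dollars / (1 + discount_rate) ** years

# present_value is about 0.63: waiting a decade costs roughly 37% of the
# dollar's value under these assumptions, i.e. ~4.5% per year. The argument
# is that capacity can grow much faster than that.
```

Only the gap between the two rates matters here: `present_value` equals `((1 + invest_return) / (1 + discount_rate)) ** years`, so a larger gap shrinks it exponentially.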

&lt;p&gt;My personal predictions on the three conditions:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;The capacity:funding ratio will indeed greatly increase in the future. &lt;a href=&quot;/confidence_tags&quot;&gt;Confidence&lt;/a&gt;: Likely.&lt;/li&gt;
  &lt;li&gt;I don’t know how quickly the value of money will decrease over time. My best guess is that all current EA funds combined are not enough to solve the world’s biggest problems, so we will need as much additional funding as we can get. Confidence: Possible.&lt;/li&gt;
  &lt;li&gt;The discount rate is low enough that it doesn’t affect the outcome of this question. Confidence: Highly likely.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Ultimately, this argument doesn’t tell us much about whether any particular person should go into direct work or earning to give. And even if earning to give looks valuable, it might still be true that, in general, more people should be doing direct work.&lt;/p&gt;

&lt;h2 id=&quot;appendix-some-related-but-distinct-arguments&quot;&gt;Appendix: Some related but distinct arguments&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The first argument:&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
  &lt;ol&gt;
    &lt;li&gt;Altruists have a low discount rate.&lt;/li&gt;
    &lt;li&gt;[some other premises]&lt;/li&gt;
    &lt;li&gt;Therefore, they should give later rather than now.&lt;/li&gt;
    &lt;li&gt;Therefore, earning to give is good.&lt;/li&gt;
  &lt;/ol&gt;
&lt;/blockquote&gt;

&lt;p&gt;(Phil Trammell presents an argument like this &lt;a href=&quot;https://forum.effectivealtruism.org/posts/amdReARfSvgf5PpKK/phil-trammell-philanthropy-timing-and-the-hinge-of-history&quot;&gt;here&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;This is a related argument in the sense that it considers the value of marginal dollars now vs. later. But the argument I presented does not require altruists to have a low discount rate.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The second argument:&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
  &lt;ol&gt;
    &lt;li&gt;There is an optimal percentage of resources for the community to spend vs. invest in a given year.&lt;/li&gt;
    &lt;li&gt;The community may end up spending above this level in the future.&lt;/li&gt;
    &lt;li&gt;If that happens, having people switch to earning to save may be one of the best ways to deal with it.&lt;/li&gt;
  &lt;/ol&gt;
&lt;/blockquote&gt;

&lt;p&gt;(source: Ben Todd, &lt;a href=&quot;https://forum.effectivealtruism.org/posts/J5aYvsiLoAC46DSuY/an-argument-for-keeping-open-the-option-of-earning-to-save&quot;&gt;An argument for keeping open the option of earning to save&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;Like my argument, this argument also considers present vs. future constraints. But it’s talking about the tradeoff between spending and investing for people who are &lt;em&gt;already&lt;/em&gt; earning to give, not about the choice to earn money vs. do direct work.&lt;/p&gt;



                </description>
			</item>
		
			<item>
				<title>Low-Hanging (Monetary) Fruit for Wealthy EAs</title>
				<pubDate>Sat, 16 Oct 2021 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2021/10/16/low_hanging_fruit_wealthy_eas/</link>
				<guid isPermaLink="true">http://mdickens.me/2021/10/16/low_hanging_fruit_wealthy_eas/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;&lt;em&gt;&lt;a href=&quot;/confidence_tags&quot;&gt;Confidence&lt;/a&gt;: Likely.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Cross-posted to the &lt;a href=&quot;https://forum.effectivealtruism.org/posts/2McPz3d3gr7gSoFFQ/low-hanging-monetary-fruit-for-wealthy-eas&quot;&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Ordinary wealthy people don’t care as much about getting more money because they already have a lot of it. So we should expect to be able to find overlooked methods for rich people to get richer.&lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt; Wealthy effective altruists might value their billionth dollar nearly as much as their first dollar, so they should seek out these overlooked methods.&lt;/p&gt;

&lt;p&gt;If someone got rich doing X (where X = starting a startup, excelling at a high-paying profession, etc.), their best way of making money on the margin might not be to do more X. It might be to do something entirely different.&lt;/p&gt;

&lt;p&gt;Some examples:&lt;/p&gt;

&lt;p&gt;(Edit 2024-03-18: This paragraph did not age well…although the point about retaining equity is still valid.)&lt;/p&gt;

&lt;p&gt;Sam Bankman-Fried increased his net worth by $10,000,000,000 in four years by founding FTX. He earned most of those zeroes by doing the hard work of starting a company, and there’s no shortcut around that. But, importantly, he managed to retain most of his original stake in FTX. For most founders, by the time their company is worth $10 billion or more, they only own maybe 10% of it. If Sam had given away a normal amount of equity to VCs, he might have only gotten $2 billion from FTX instead of $10 billion. In some sense, 80% of the money he earned from FTX came purely from retaining equity.&lt;sup id=&quot;fnref:4&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:4&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;!-- more --&gt;

&lt;p&gt;(Bill Gates might be another person who got richer than usual by retaining ownership.&lt;sup id=&quot;fnref:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt;)&lt;/p&gt;

&lt;p&gt;Warren Buffett is an investing genius. But if he had &lt;em&gt;only&lt;/em&gt; been an investing genius, he would have made a lot less money. In addition to earning high investment returns, he borrowed money at below-market interest rates using &lt;a href=&quot;https://www.shortform.com/blog/insurance-float-warren-buffett/&quot;&gt;insurance float&lt;/a&gt;. That let him compound his returns much faster than he otherwise would have.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://en.wikipedia.org/wiki/Long-Term_Capital_Management&quot;&gt;Long-Term Capital Management&lt;/a&gt; was an extremely successful hedge fund (at least until it blew up, but let’s not talk about that part). Much of its success didn’t come from its trading strategies, but from the fact that it negotiated unusually favorable deals with its banking partners.&lt;/p&gt;

&lt;p&gt;A quasi-example: Instead of trying to make more money, try to (legally) pay less tax. This idea is not exactly overlooked, so it’s not a central example of what I’m talking about. But I &lt;em&gt;have&lt;/em&gt; noticed that most “normal-level” rich people (with between $1 million and $10 million) don’t pay nearly enough attention to taxes. I know a lot of people who could save $10,000 to $100,000 per year with 20 hours of work or less.&lt;/p&gt;

&lt;p&gt;In these examples, it’s not necessarily that the person (or company) came up with a trick that nobody else had thought of. Obviously every startup founder would love to retain as much of their stock as they can. But they usually have to choose between giving up equity and running out of funding, and they rightly choose to keep their company alive. My point isn’t that it’s easy. My point is that, &lt;em&gt;on the margin&lt;/em&gt;, EA founders should probably pay more attention to retaining equity. Maybe a founder could do some extra work and find a way to keep 11% of their equity instead of 10%, without meaningfully reducing their startup’s chances of success. Getting that extra 1 percentage point of equity might be a lot easier than raising the company’s ultimate valuation by 10%.&lt;/p&gt;
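The arithmetic behind that last comparison, using a hypothetical $10 billion exit (the numbers are illustrative, not from any real cap table): keeping 11% instead of 10% pays the founder exactly as much as raising the final valuation by 10%.

```python
valuation = 10_000_000_000  # hypothetical exit valuation, in dollars

payout_keep_10 = valuation * 10 // 100                      # 10% of $10B
payout_keep_11 = valuation * 11 // 100                      # 11% of $10B
payout_bigger_exit = (valuation * 110 // 100) * 10 // 100   # 10% of $11B

# 0.11 * V == 0.10 * (1.1 * V): the founder nets $1.1B either way,
# so one extra point of retained equity matches a 10% bigger exit.
```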

&lt;p&gt;Wealthy EAs care more about marginal dollars than ordinary wealthy people, and they can take advantage of that fact. They can look for (relatively) easy ways of getting money. I only spent about five minutes coming up with the examples I gave, so I’m sure there are plenty of other ideas.&lt;/p&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:2&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;For people with about average wealth, the best way to get more money is usually the obvious thing: get a higher salary. But there are still some neglected strategies. One that comes to mind is salary negotiation. &lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:4&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;The mathy way to say this is that, on a logarithmic scale, most of his wealth came from doing the hard work. But on a linear scale, it mostly came from retaining equity. If most people have logarithmic utility of money but you have linear utility, that changes how you should behave. &lt;a href=&quot;#fnref:4&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
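A toy version of that log-vs-linear point. The starting net worth of $10,000 is an assumption made up for illustration; the other two figures echo the post's $2B-vs-$10B example.

```python
import math

start = 1e4          # assumed starting net worth (hypothetical)
after_work = 2e9     # what a "normal" diluted equity stake might have yielded
after_equity = 1e10  # net worth after retaining most equity

# Linear scale: ~80% of the gain came from retaining equity
linear_share_equity = (after_equity - after_work) / (after_equity - start)

# Log scale: ~88% of the gain came from building the company at all
log_share_work = ((math.log10(after_work) - math.log10(start))
                  / (math.log10(after_equity) - math.log10(start)))
```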
    &lt;/li&gt;
    &lt;li id=&quot;fn:3&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Elon Musk did something sort of similar, where he got the Tesla board of directors to grant him highly leveraged call options. When Tesla stock went up, Elon Musk’s net worth went up by even more. &lt;a href=&quot;#fnref:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>Do I Read My Own Citations?</title>
				<pubDate>Fri, 08 Oct 2021 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2021/10/08/do_i_read_citations/</link>
				<guid isPermaLink="true">http://mdickens.me/2021/10/08/do_i_read_citations/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;It has often been said that scholars don’t read their own citations. Out of curiosity, I decided to go through &lt;a href=&quot;https://mdickens.me/2020/11/23/uncorrelated_investing/&quot;&gt;one of my longer essays&lt;/a&gt; to see how many of my citations I read.&lt;/p&gt;

&lt;p&gt;(I actually did this exercise a while ago, around the time I published the original essay. Today I was going through my personal journal and found my notes on the exercise, and I thought it might be worth sharing publicly.)&lt;/p&gt;

&lt;p&gt;The results:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;My essay cites a total of 37 academic papers.&lt;/li&gt;
  &lt;li&gt;For 9 citations, I read the entire thing top to bottom and took notes to help me remember.&lt;/li&gt;
  &lt;li&gt;For another 6 citations, I skimmed them but didn’t read carefully or take notes.&lt;/li&gt;
  &lt;li&gt;For the remaining 22, I only read the abstracts.&lt;/li&gt;
&lt;/ul&gt;

&lt;!-- more --&gt;

&lt;p&gt;I also replicated the results of 7 of the 37 citations. I didn’t do full replications, but I did reproduce at least one result from each paper. 6 out of those 7 were what you might call “trusting” replications: I got data from the study authors and used that to replicate their findings, but I’m still trusting that they constructed their data set correctly. For one of the 7 replications, I used an independent data set.&lt;/p&gt;

&lt;p&gt;(There’s another level of reading that’s less rigorous than a replication but more rigorous than a full read-through: a “close reading” with the intentional goal of identifying mathematical or logical errors. I did not do that for any of my citations. Replicating a paper is (arguably) harder than doing a close reading, but I preferred to do outright replications because I find it more fun.)&lt;/p&gt;

&lt;p&gt;Why did I approach the citations this way? Here’s why:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;A minority of papers were directly relevant to my thesis, so I read those in full. Whether I read or skimmed a paper largely depended on how well it was written. I can barely force myself to read papers with bad prose, so I just skim those.&lt;/li&gt;
  &lt;li&gt;For most citations, I just wanted to know some specific fact, and I searched the literature until I found one that included the fact I wanted. For example: I wondered, “do mutual funds have any net exposure to the value factor?” I found a couple of papers that purportedly answered this question—&lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2912287&quot;&gt;Blitz (2017)&lt;/a&gt; and &lt;a href=&quot;https://www.nber.org/system/files/working_papers/w25381/w25381.pdf&quot;&gt;Lettau et al. (2018)&lt;/a&gt;—skimmed them to find the relevant tables, and then cited them, trusting that they were methodologically sound.&lt;/li&gt;
  &lt;li&gt;Another common cause of me not reading citations is that I’d read an interesting fact in an article or book, decide to put the fact in my essay, and then cite the same paper that that article or book cited. (Pretty much the classic “don’t cite Wikipedia, just read Wikipedia and then copy its citations” technique.)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As a heuristic, if multiple studies find similar results, that gives me more confidence in their conclusions without having to read them. It’s not perfect, but it’s a reasonable way to save time. For example, I haven’t read &lt;a href=&quot;http://www.business.unr.edu/faculty/liuc/files/badm742/fama_french_1992.pdf&quot;&gt;Fama &amp;amp; French (1992)&lt;/a&gt; beyond the abstract, but I know it’s been replicated lots of times.&lt;/p&gt;

&lt;p&gt;Sure, in an ideal world, I should carefully analyze every paper I cite to check for mistakes. But given time constraints, it makes sense for me not to read most of the papers I cite. Still, I can see why bad papers often get widely cited—nobody reads them! I wish I had more time and energy to read more stuff. (I also wish most academic papers weren’t written so poorly. If they were better written, reading them might actually be fun.)&lt;/p&gt;



                </description>
			</item>
		
			<item>
				<title>Summaries Are Important</title>
				<pubDate>Mon, 04 Oct 2021 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2021/10/04/summaries_are_important/</link>
				<guid isPermaLink="true">http://mdickens.me/2021/10/04/summaries_are_important/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;Every informative essay or research paper should include a summary at the beginning. Write your summary with the expectation that most readers will ONLY read the summary. The summary should tell most readers everything they need to know. The body of the article only exists to provide context and supporting evidence.&lt;/p&gt;

&lt;!-- more --&gt;

&lt;p&gt;(That was my summary. You can probably stop reading now.)&lt;/p&gt;

&lt;p&gt;The summary of your article should describe your central conclusions, which are ultimately the point of your article. A summary is high-context/high-trust, while the body of an article is low-context/low-trust. A summary is high-context because you write it for readers who have enough context to understand what you’re talking about. It’s high-trust because it’s for readers who trust that you know what you’re talking about, and who you trust not to deliberately misinterpret you. Then you write the body of the article to give additional context, and to justify your conclusions to skeptical or inquisitive readers.&lt;/p&gt;

&lt;p&gt;There are basically two types of articles that should have a summary:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;
    &lt;p&gt;In a research paper, the goal is to test some hypothesis. (That hypothesis doesn’t have to be empirical, e.g., your paper could prove a mathematical theorem.) Most readers only care about what the hypothesis is, and whether it’s true or false.&lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt; So put that in your summary.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;In a persuasive essay, the central argument is usually pretty simple. Put the central argument in the summary. This might be sufficient to persuade open-minded readers who have enough background to understand the argument. Then, in the body of the essay, explain the argument and provide supporting evidence.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If most readers only read the summary, that means you should spend much more time on the summary to make sure it conveys exactly the message you want. When I write an essay, I usually spend 5–10 times longer per word writing the summary than writing any other part.&lt;/p&gt;

&lt;p&gt;Academic papers almost always include abstracts. An abstract is like a summary. Unfortunately, many (most?) papers don’t have good abstracts. Some common mistakes:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;The abstract uses overly convoluted language, making it hard to understand. (An abstract/summary is high-context, but that doesn’t mean it has to be hard to understand.)&lt;/li&gt;
  &lt;li&gt;The abstract describes what the paper does, but doesn’t describe what conclusions it reaches.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;examples-of-bad-and-good-summaries&quot;&gt;Examples of bad and good summaries&lt;/h2&gt;

&lt;p&gt;I don’t want to pick on anyone, so I’ll write my own bad summary.&lt;/p&gt;

&lt;p&gt;Luke Muehlhauser’s &lt;a href=&quot;https://www.openphilanthropy.org/blog/how-feasible-long-range-forecasting&quot;&gt;How Feasible Is Long-Range Forecasting?&lt;/a&gt; has an excellent summary, which I’ll talk about in a minute. But first, here’s my own (bad) summary of this article, written in the style of a typical academic paper’s abstract:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;Long-term projects or endeavors depend on predictions of distant future events under low information and high uncertainty. This paper investigates historical evidence on long-term track records. There are significant concerns regarding historical forecasts with respect to quantification of degrees of confidence, null hypotheses, and forecaster behavior. Some difficulties are discussed for assessing the accuracy and determining the value of long-term forecasts. In conclusion, this organization’s current attitude toward long-range forecasting is provided.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This summary is bad for several reasons:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;The sentences are long and hard to read.&lt;/li&gt;
  &lt;li&gt;It doesn’t actually explain the findings of the paper. “Some difficulties are discussed”? What difficulties, specifically? Don’t make me read the entire article to learn the article’s results.&lt;/li&gt;
  &lt;li&gt;It tries too hard to sound objective, which contorts the language. “[T]his organization’s current attitude toward long-range forecasting is provided”? Why not, “We provide our current attitude toward long-range forecasting”? Or, even better, instead of saying that you’ll provide your attitude, simply &lt;em&gt;provide your attitude&lt;/em&gt;: something like, “We don’t know how to make good long-term forecasts, but we plan to continue making them so we can get better at it.”&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now let’s read Muehlhauser’s &lt;a href=&quot;https://www.openphilanthropy.org/blog/how-feasible-long-range-forecasting&quot;&gt;actual summary&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;ol&gt;
    &lt;li&gt;Long-range forecasts are often stated too imprecisely to be judged for accuracy. &lt;a href=&quot;https://www.openphilanthropy.org/blog/how-feasible-long-range-forecasting#Imprecisely_stated_forecasts&quot;&gt;[More]&lt;/a&gt;&lt;/li&gt;
    &lt;li&gt;Even if a forecast is stated precisely, it might be difficult to find the information needed to check the forecast for accuracy. &lt;a href=&quot;https://www.openphilanthropy.org/blog/how-feasible-long-range-forecasting#Practically_uncheckable_forecasts&quot;&gt;[More]&lt;/a&gt;&lt;/li&gt;
    &lt;li&gt;Degrees of confidence for long-range forecasts are rarely quantified. &lt;a href=&quot;https://www.openphilanthropy.org/blog/how-feasible-long-range-forecasting#Non-quantified_degrees_of_confidence&quot;&gt;[More]&lt;/a&gt;&lt;/li&gt;
    &lt;li&gt;In most cases, no comparison to a “baseline method” or “null model” is possible, which makes it difficult to assess how easy or difficult the original forecasts were. &lt;a href=&quot;https://www.openphilanthropy.org/blog/how-feasible-long-range-forecasting#No_comparison_to_a_baseline_method_or_null_model_is_feasible&quot;&gt;[More]&lt;/a&gt;&lt;/li&gt;
    &lt;li&gt;Incentives for forecaster accuracy are usually unclear or weak. &lt;a href=&quot;https://www.openphilanthropy.org/blog/how-feasible-long-range-forecasting#Unclear_or_weak_incentives_for_accuracy&quot;&gt;[More]&lt;/a&gt;&lt;/li&gt;
    &lt;li&gt;Very few studies have been designed so as to allow confident inference about which factors contributed to forecasting accuracy. &lt;a href=&quot;https://www.openphilanthropy.org/blog/how-feasible-long-range-forecasting#Weak_strategy_for_causal_identification&quot;&gt;[More]&lt;/a&gt;&lt;/li&gt;
    &lt;li&gt;It’s difficult to know how comparable past forecasting exercises are to the forecasting we do for grantmaking purposes, e.g. because the forecasts we make are of a different type, and because the forecasting training and methods we use are different. &lt;a href=&quot;https://www.openphilanthropy.org/blog/how-feasible-long-range-forecasting#Unclear_relevance_to_our_own_long-range_forecasting&quot;&gt;[More]&lt;/a&gt;&lt;/li&gt;
  &lt;/ol&gt;
&lt;/blockquote&gt;

&lt;p&gt;Take the first sentence of this summary:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;ol&gt;
    &lt;li&gt;Long-range forecasts are often stated too imprecisely to be judged for accuracy.&lt;/li&gt;
  &lt;/ol&gt;
&lt;/blockquote&gt;

&lt;p&gt;This sentence forms a complete thought and tells me something that I might not have known. I don’t have any way of knowing whether this statement is true, so the article needs to justify it. But if I trust that the author knows what he’s doing, I could stop reading at this point without losing much.&lt;/p&gt;

&lt;p&gt;For most readers, those seven conclusions are all they need to know. Every point on this list is easy to understand and tells you something interesting. There’s no need to read the rest of the article. But if you want to know the details, you can follow the links or continue reading to see how the author came to those conclusions.&lt;/p&gt;

&lt;h2 id=&quot;how-to-write-a-good-summary&quot;&gt;How to write a good summary&lt;/h2&gt;

&lt;p&gt;I don’t really know how to write a good summary. I know one when I see one, but writing is harder than reading.&lt;/p&gt;

&lt;p&gt;Some strategies I use:&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Write a summary that matches the overall structure of the article. If your article includes three parts, write one sentence for each part.&lt;/li&gt;
  &lt;li&gt;If your summary covers several independent points, write them as a bulleted list, not a big paragraph. Bulleted lists are easier to read (in my experience).&lt;/li&gt;
  &lt;li&gt;Make each sentence as short as possible without losing any critical information.&lt;/li&gt;
  &lt;li&gt;If you must choose between precision and clarity, choose clarity. People often end up writing overly convoluted sentences because they’re trying to make their statements impossible to misinterpret. That’s okay sometimes, but don’t overdo it, especially not in a summary. Keep the summary clear, even if that means readers have to read between the lines a little.&lt;/li&gt;
  &lt;li&gt;Rewrite each sentence multiple times. Pick whichever version is clearest.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A lot of the advice for writing a good summary is the same as the advice for writing well in general. But even if you don’t meticulously comb through your entire article to make it as clear as possible, you &lt;em&gt;should&lt;/em&gt; spend a lot of time on your summary.&lt;/p&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:2&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Obviously, a paper on an empirical subject cannot definitively prove its claim true or false. Rather, the paper provides evidence pointing to “true” or to “false”, and the reader wants to know which way the evidence points. &lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I did not take all of my own advice in this essay. This essay is short, and these strategies are more relevant to long articles. &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>Argumentative Tactics I Would Like to Stop Seeing</title>
				<pubDate>Mon, 04 Oct 2021 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2021/10/04/bad_argumentative_tactics/</link>
				<guid isPermaLink="true">http://mdickens.me/2021/10/04/bad_argumentative_tactics/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;I realize I’m fighting a losing battle on this one and most people don’t care if they behave uncharitably toward their opponents. People are always gonna mock people they disagree with. But I don’t like it.&lt;/p&gt;

&lt;!-- more --&gt;

&lt;h3 id=&quot;argument-from-my-opponent-made-a-silly-face&quot;&gt;Argument from my opponent made a silly face&lt;/h3&gt;

&lt;p&gt;News articles about a particular person, especially when the writer wants to make that person look bad, will often include a cover photo of that person making a silly face.&lt;/p&gt;

&lt;p&gt;Have you ever paused a movie during a dialogue scene and noticed that the speaker appeared to be making a silly face? It’s easy to take a video of a person speaking and pick out still frames where they look goofy. News articles love doing this because they can make someone look bad while still claiming to be objective.&lt;/p&gt;

&lt;p&gt;Here are some examples of silly faces that I found in news articles with various political leanings:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/silly-Bezos.jpg&quot; /&gt;
&lt;img src=&quot;/assets/images/silly-Biden.jpg&quot; /&gt;
&lt;img src=&quot;/assets/images/silly-Trump.png&quot; /&gt;&lt;/p&gt;

&lt;h3 id=&quot;argument-from-my-opponent-sounds-silly&quot;&gt;Argument from my opponent sounds silly&lt;/h3&gt;

&lt;p&gt;People sometimes quote their opponents as mispronouncing certain popular phrases, and that’s their entire argument.&lt;/p&gt;

&lt;p&gt;Examples:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;“freeze peach”&lt;/li&gt;
  &lt;li&gt;“muh X” for various values of X (“muh jobs”, “muh freedom”)&lt;/li&gt;
  &lt;li&gt;“well, aktually”&lt;/li&gt;
  &lt;li&gt;“dead gaem” (I see this one a lot on gaming forums)&lt;/li&gt;
  &lt;li&gt;“they tuk er jerbs!” (I had no problem with South Park &lt;a href=&quot;https://www.imdb.com/title/tt0761287/&quot;&gt;using this phrase&lt;/a&gt; because it was a joke. I do have a problem with people repeating it as if it’s an actual argument)&lt;/li&gt;
  &lt;li&gt;aNy TiMe SoMeOnE tYpEs LiKe ThIs&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;argument-from-my-opponent-has-unstylish-facial-hair&quot;&gt;Argument from my opponent has unstylish facial hair&lt;/h3&gt;

&lt;p&gt;It is widely recognized that calling someone ugly is not an acceptable argumentative tactic. But for some reason, people make an exception for the term “neckbeard”.&lt;/p&gt;



                </description>
			</item>
		
			<item>
				<title>My Experience Trying to Force Myself to Do Deep Work</title>
				<pubDate>Mon, 23 Aug 2021 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2021/08/23/my_deep_work_experience/</link>
				<guid isPermaLink="true">http://mdickens.me/2021/08/23/my_deep_work_experience/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;&lt;em&gt;Inspired by Applied Divinity Studies’ &lt;a href=&quot;https://applieddivinitystudies.com/2020/09/03/quitting-II/&quot;&gt;Unemployment Part 2&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Many people, such as &lt;a href=&quot;https://www.calnewport.com/books/deep-work/&quot;&gt;Cal Newport&lt;/a&gt;, say that you can only do about four hours of deep work per day. I am a lot worse at deep work than that.&lt;/p&gt;

&lt;p&gt;When I worked full-time as a software developer, I tried pretty hard to avoid distractions and stay focused on work. At the end of each day, I made a quick estimate of how much I got done that day. I rated myself on a 5-point productivity scale. A fully productive day, where I spent the bulk of the day doing meaningful work, earned the full 5 points. My estimates were by no means objective, but according to my own perception, I scored 5 points on a total of 91 out of 602 work days (that’s 15%). A 5-point day usually meant I spent around four hours doing deep work, and most of the rest of the day doing important shallow work.&lt;/p&gt;

&lt;!-- more --&gt;

&lt;p&gt;When I failed to do sufficient deep work, it usually wasn’t because I spent too much time on meetings and emails (although that did happen occasionally). Most of the time, I was simply too dumb to get enough useful work done.&lt;/p&gt;

&lt;p&gt;Most of my work required having insights. If I’m trying to debug a problem, figure out how to extend some bit of code, or plan out a new feature, I need to have a series of small ideas about how to make progress. For example, if I’m looking for the source of a bug, I might have the idea, “The problem might be in this particular function. I’ll step through the function and look for any anomalies.” On my less productive days, I would simply fail to generate any such ideas, preventing me from making any progress.&lt;/p&gt;

&lt;p&gt;On my worst days (which happened about 15% of the time), I couldn’t manage even simple tasks. This was more about focus than about insight. It would usually go like this:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;Me: Here’s the task that we need to do. As you can see, I’ve opened the relevant file and I’m currently looking at it. We need to do X to this file.
Brain: I don’t want to do that.
Me: Come on, you know exactly what to do, you should just do it.
Brain: No. Can we do something fun instead?
Me: We have to do this.
Brain: Well I’m not gonna.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Since I refused to get distracted and my brain refused to do the actual work, I’d end up staring at the screen doing nothing for about an hour. Eventually I’d give up on that and go read Hacker News or think about investing—things that feel work-adjacent but don’t qualify as genuine work.&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt; After doing that for about 15 to 30 minutes, I’d feel guilty enough to go back to staring at my code while doing nothing.&lt;/p&gt;

&lt;h2 id=&quot;what-predicts-productivity&quot;&gt;What predicts productivity?&lt;/h2&gt;

&lt;p&gt;I tracked a bunch of variables to see if anything could predict whether I’d have a good or bad day. I looked at sleep, how much I ate, the glycemic index of my lunch&lt;sup id=&quot;fnref:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt;, whether I went to the gym, and whether I had melatonin the previous night; none showed a statistically significant effect. The only thing that clearly mattered was whether I had caffeine. On a five-point scale, caffeine boosted my average productivity by about 0.7 points.&lt;/p&gt;

&lt;p&gt;(I’m highly confident that I perform worse when I don’t get enough sleep. But my sleep schedule was consistent enough that I didn’t have a sufficiently large sample of sleep-deprived work days to show a visible effect.)&lt;/p&gt;

&lt;p&gt;(I didn’t blind myself, so any positive results could be explained by a placebo effect. But I only got one positive result, and I wasn’t expecting to get it before I analyzed the data—subjectively, I felt about the same on caffeine days and non-caffeine days.)&lt;/p&gt;
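&lt;p&gt;A check like this boils down to comparing mean productivity on caffeine days against non-caffeine days, e.g. with a two-sample t-test. Here’s a minimal sketch using invented scores rather than my actual logs:&lt;/p&gt;

```python
# Sketch of the caffeine comparison: difference in mean productivity
# plus a two-sample t-test. The scores are invented for illustration.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
caffeine_days = rng.normal(3.7, 1.0, 300)     # 5-point productivity scores
no_caffeine_days = rng.normal(3.0, 1.0, 300)

diff = caffeine_days.mean() - no_caffeine_days.mean()
t, p = ttest_ind(caffeine_days, no_caffeine_days)
print(f"mean difference = {diff:.2f} points, p = {p:.2g}")
```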

&lt;p&gt;About a year ago, when I started working full-time on independent research, the frequency of fully-productive days went up from 15% to 24% (53 out of 221). But median productivity declined, probably because I stopped trying to force myself to work when I didn’t want to.&lt;/p&gt;

&lt;p&gt;My productivity tended to come in waves. I’d think of some interesting and digestible sub-problem, make a bunch of progress on it for a few days or weeks (depending on the size of the problem), figure out all the obvious stuff, and then lose steam and stop being productive for a while.&lt;/p&gt;

&lt;p&gt;During waves of low productivity, the issue wasn’t that I didn’t know what to work on. I had plenty of ideas about important research areas. The problem was that I didn’t feel motivated to work on them. Usually, I’d only feel motivated to work on a problem shortly after I came up with it. Once the problem became stale, I’d lose interest. But sometimes I’d get a sudden second wind on an old stale problem. It would be nice if I knew what caused those second winds, or if I could find a way to reliably trigger them. But so far I haven’t figured out how to do that.&lt;/p&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Some of the time spent on investing genuinely improved my life, and maybe even helped improve other people’s lives. But I also spent a lot of time doing obviously-useless things like checking the current value of my portfolio, or changing my hypothetical allocation to a particular strategy from 5% to 10% and then changing it back later.&lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt; &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:3&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I didn’t have any precise way of measuring this. I just kept a list of common foods and their glycemic indexes, and then estimated the average glycemic index of my meal based on how much of each food I ate. &lt;a href=&quot;#fnref:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:2&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I say hypothetical because I have a rule that I only change my actual allocation if I consistently believe I should change it for a while—maybe a month or longer (depending on the scope of the change). I want to avoid what you might call “tinkering bias”—the desire to tinker with my allocation because tinkering is fun, not because I’m actually improving it. So I rarely change my actual investments. &lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>Mission Hedgers Want to Hedge Quantity, Not Price</title>
				<pubDate>Wed, 18 Aug 2021 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2021/08/18/mission_hedging_quantity_price/</link>
				<guid isPermaLink="true">http://mdickens.me/2021/08/18/mission_hedging_quantity_price/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;&lt;strong&gt;Summary:&lt;/strong&gt; Mission hedgers want to hedge the quantity of a good, but can only directly hedge the price. As a motivating example, can we mission hedge climate change using oil futures or oil company stock? Based on a cursory empirical analysis, it appears that we can, and that oil stock makes for the better hedge. But this answer relies on some questionable data (either that, or my methodology is bad).&lt;/p&gt;

&lt;h2 id=&quot;introduction&quot;&gt;Introduction&lt;/h2&gt;

&lt;p&gt;The purpose of &lt;a href=&quot;https://forum.effectivealtruism.org/posts/iZp7TtZdFyW8eT5dA/a-generalized-strategy-of-mission-hedging-investing-in-evil&quot;&gt;mission hedging&lt;/a&gt; is to earn more money in worlds where your money matters more. For instance, if you’re working to prevent climate change, you could buy stock in oil companies. In worlds where oil companies are more successful and climate change is worse, you make more money—at least in theory. But it might be hard in practice to find a good way to hedge climate change.&lt;/p&gt;

&lt;p&gt;Let’s start with a simple example. Suppose you’re considering hedging climate change by buying oil &lt;a href=&quot;https://www.investopedia.com/terms/c/commodityfuturescontract.asp&quot;&gt;futures&lt;/a&gt;. Does that work?&lt;/p&gt;

&lt;p&gt;If people burn more oil, that directly contributes to climate change. You’d like to make money as the &lt;strong&gt;quantity&lt;/strong&gt; of oil goes up. If you buy oil futures, you will make money as the &lt;strong&gt;price&lt;/strong&gt; of oil goes up. The problem is, the quantity and price of oil aren’t necessarily related.&lt;/p&gt;

&lt;p&gt;Before we talk about the economics of oil prices, let’s clarify some terminology.&lt;/p&gt;

&lt;!-- more --&gt;

&lt;h2 id=&quot;supply-and-demand&quot;&gt;Supply and demand&lt;/h2&gt;

&lt;p&gt;The basic drivers of supply and demand:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Consumers are willing to buy more of some product if the price is lower.&lt;/li&gt;
  &lt;li&gt;Producers are willing to sell more of the product if the price is higher.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In aggregate, consumers’ willingness to buy the product creates a downward-sloping demand curve, which indicates the quantity consumers will buy at any given price—or, conversely, the price they will pay for any given quantity. Similarly, producers’ willingness to sell gives us an upward-sloping supply curve. These two curves cross over at some point, and that point tells us the actual market price and quantity of a product.&lt;/p&gt;

&lt;p&gt;A poorly-drawn illustration of what the supply and demand curves look like:&lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/supply-and-demand.png&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Demand for oil might increase if, say, people start owning more cars. If demand increases, both the price and quantity of the good go up:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/demand-increase.png&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Supply increases when producers are willing to sell more. For example, the supply of oil could increase if oil companies discover a new source of oil. If supply increases, quantity goes up and price goes down:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/supply-increase.png&quot; /&gt;&lt;/p&gt;

&lt;p&gt;And, naturally, supply and demand can move in the other direction:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;If demand decreases, both quantity and price go down.&lt;/li&gt;
  &lt;li&gt;If supply decreases, quantity goes down while price goes up.&lt;/li&gt;
&lt;/ul&gt;
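&lt;p&gt;These shifts can be checked with a tiny numerical sketch. The linear curves and coefficients below are made up for illustration; they aren’t calibrated to real oil data:&lt;/p&gt;

```python
# Toy linear supply/demand model illustrating the shifts above.
# Demand: p = d_int - d_slope * q; supply: p = s_int + s_slope * q.

def equilibrium(d_int, d_slope, s_int, s_slope):
    """Return the (quantity, price) point where the two curves cross."""
    q = (d_int - s_int) / (d_slope + s_slope)
    p = d_int - d_slope * q
    return q, p

q0, p0 = equilibrium(100, 1.0, 10, 0.5)  # baseline: q=60, p=40
q1, p1 = equilibrium(120, 1.0, 10, 0.5)  # demand shifts up
q2, p2 = equilibrium(100, 1.0, -5, 0.5)  # supply shifts out

assert q1 > q0 and p1 > p0  # more demand: quantity and price both rise
assert q2 > q0 and p2 < p0  # more supply: quantity rises, price falls
```

&lt;p&gt;(With these made-up numbers, the baseline equilibrium is q=60, p=40; the demand increase moves it to roughly q=73, p=47; the supply increase moves it to q=70, p=30.)&lt;/p&gt;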

&lt;h2 id=&quot;back-to-mission-hedging&quot;&gt;Back to mission hedging&lt;/h2&gt;

&lt;p&gt;So, can we mission hedge by buying oil futures?&lt;/p&gt;

&lt;p&gt;If we own oil futures, we make money when the price goes up. But climate change gets worse when the &lt;em&gt;quantity&lt;/em&gt; goes up. And there’s no obvious relationship between price and quantity. If demand changes, price and quantity move together. But if supply changes, price and quantity move oppositely.&lt;/p&gt;

&lt;p&gt;If oil prices are mostly driven by changes in demand, we can mission hedge by buying oil futures. If prices are mostly driven by supply, we can hedge by &lt;a href=&quot;https://www.investopedia.com/terms/s/shortselling.asp&quot;&gt;short-selling&lt;/a&gt; oil. If supply and demand matter about equally, price and quantity will be uncorrelated and we can’t use oil futures as a hedge.&lt;/p&gt;

&lt;p&gt;What’s the relationship between oil price and quantity in practice? I did about 10 minutes of research and found some data from the U.S. Energy Information Administration (EIA)’s &lt;a href=&quot;https://www.eia.gov/finance/markets/crudeoil/&quot;&gt;Short-Term Energy Outlook&lt;/a&gt;. Based on this data, it appears that price and quantity are moderately correlated (r=0.67, p&amp;lt;1e-16)&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt;. If this is correct, we can at least moderately hedge climate change by buying oil futures.&lt;/p&gt;

&lt;p&gt;(I’m not at all confident that this 0.67 figure is accurate. I found a different, seemingly-reasonable data set that shows a negative correlation—see &lt;a href=&quot;#methodological-concerns&quot;&gt;Methodological concerns&lt;/a&gt;.)&lt;/p&gt;
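&lt;p&gt;For anyone who wants to replicate this kind of check, the calculation is just a Pearson correlation with a p-value. A minimal sketch, using synthetic stand-in series rather than the actual EIA and investing.com data:&lt;/p&gt;

```python
# Sketch of the price/quantity correlation check. The two series below
# are synthetic stand-ins; the real analysis would load the quarterly
# EIA consumption data and the futures-price data instead.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
quantity = np.cumsum(rng.normal(0.5, 1.0, 120))   # stand-in for oil consumption
price = 0.7 * quantity + rng.normal(0, 5.0, 120)  # stand-in for futures prices

r, p = pearsonr(quantity, price)
print(f"r = {r:.2f}, p = {p:.2g}")
```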

&lt;h2 id=&quot;hedging-with-oil-stocks&quot;&gt;Hedging with oil stocks&lt;/h2&gt;

&lt;p&gt;Can we do better by buying/short-selling stock in oil companies instead?&lt;/p&gt;

&lt;p&gt;We want to make money when the quantity of oil goes up. It takes four steps to get from quantity to stock return:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;The revenue of an oil company roughly equals [quantity of oil sold] x [price of oil].&lt;/li&gt;
  &lt;li&gt;Profit equals revenue multiplied by profit margin.&lt;/li&gt;
  &lt;li&gt;Companies can return value to shareholders by paying out dividends and buybacks.&lt;/li&gt;
  &lt;li&gt;The price of a stock (theoretically) equals the long-run value of all profit that will be returned to shareholders. Stock return equals dividends plus buybacks plus price change due to changes in future expectations.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Each step in this chain [quantity -&amp;gt; revenue -&amp;gt; profit -&amp;gt; dividends -&amp;gt; stock return] leaves some wiggle room. It’s not obvious that any step is correlated with the next step, much less that the first step correlates with the last.&lt;/p&gt;

&lt;p&gt;In a perfectly competitive market, increases in revenue should incentivize competition, driving down profit margins such that profits don’t change at all. So we should see no correlation between oil quantity and stock return. But markets are not perfectly competitive, so that argument might not apply in practice.&lt;/p&gt;

&lt;p&gt;We could answer this empirically by looking at the correlation between oil stock returns and the quantity of oil consumed.&lt;/p&gt;

&lt;p&gt;I took the Energy Information Administration data from before and combined it with historical returns for the Vanguard Energy Fund (&lt;a href=&quot;https://investor.vanguard.com/mutual-funds/profile/VGENX&quot;&gt;VGENX&lt;/a&gt;). They had a correlation of 0.85 (p&amp;lt;1e-36)—much stronger than the relationship between the oil quantity and oil futures prices. (A correlation of 0.85 means changes in the quantity of oil consumption explain 72% of the variation in VGENX’s price (0.85^2 = 0.72). So we might say we can hedge 72% of climate change by buying oil stock.)&lt;/p&gt;

&lt;p&gt;How much of this comes from a correlation between oil consumption and stocks in general? It happens that oil consumption correlates with the S&amp;amp;P 500&lt;sup id=&quot;fnref:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt; at 0.83 (p&amp;lt;1e-33). Oil stocks barely do better, with r=0.85. However, a combined portfolio of 50% S&amp;amp;P 500, 50% VGENX correlates with oil consumption at r=0.96 (p&amp;lt;1e-68). So according to this, VGENX does provide some hedging power beyond what the S&amp;amp;P 500 gives.&lt;/p&gt;
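&lt;p&gt;The portfolio comparison works the same way: blend the two return series element-wise, then correlate the blend with consumption. A sketch with synthetic stand-ins (not the real VGENX or S&amp;amp;P 500 data):&lt;/p&gt;

```python
# Sketch of the blended-portfolio correlation check, using synthetic
# stand-ins for oil consumption, the S&P 500, and VGENX.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
consumption = np.cumsum(rng.normal(0.5, 1.0, 120))
sp500 = 0.8 * consumption + rng.normal(0, 10.0, 120)  # noisy proxy
vgenx = 0.8 * consumption + rng.normal(0, 10.0, 120)  # another noisy proxy

blend = 0.5 * sp500 + 0.5 * vgenx  # 50/50 portfolio
r_sp, _ = pearsonr(consumption, sp500)
r_blend, _ = pearsonr(consumption, blend)
# Averaging two proxies with independent noise cancels some of that
# noise, so the blend tends to correlate more strongly than either alone.
print(f"r_sp = {r_sp:.2f}, r_blend = {r_blend:.2f}")
```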

&lt;p&gt;But as with the data on oil futures, we have some reason to question this empirical result.&lt;/p&gt;

&lt;h2 id=&quot;methodological-concerns&quot;&gt;Methodological concerns&lt;/h2&gt;

&lt;p&gt;In an earlier draft of this essay, I didn’t realize EIA had data on world consumption, so I used the closest proxy I could find—&lt;a href=&quot;https://www.eia.gov/outlooks/steo/data/browser/#/?v=29&quot;&gt;non-OPEC oil production&lt;/a&gt;—combined with their data on WTI real prices (GDP deflated). I found a correlation of –0.21 (p &amp;lt; 0.07). Later, I found EIA’s data on world oil consumption and used that instead.&lt;/p&gt;

&lt;p&gt;I would expect these two pairs of datasets to produce slightly different correlations, but I’m surprised by just &lt;em&gt;how&lt;/em&gt; different they look. Why does one pair show a meaningful positive correlation while the other shows a weak negative one? The first suggests mission hedging works pretty well, and the second suggests it barely works at all. Which one is correct?&lt;/p&gt;

&lt;p&gt;Where does the difference come from? EIA-reported real oil prices correlate with investing.com-reported oil futures prices at r=0.95 (p&amp;lt;1e-42). World oil consumption correlates with non-OPEC oil production at r=0.15 (p&amp;lt;0.17). That’s weirdly low—according to EIA, non-OPEC countries produce 60% of the world’s oil, so their production should strongly relate to world consumption. This seems to suggest that non-OPEC oil production is not a good metric to look at. But also, even though EIA oil prices look very similar to oil futures prices, and oil futures prices moderately correlate with world oil consumption (r=0.67, p&amp;lt;1e-16), EIA oil prices only weakly correlate with oil consumption (r=0.11, p&amp;lt;0.33). Part of this happens due to a difference in time horizons: the VGENX, world consumption, and futures data go back to 1990, but EIA oil prices and non-OPEC production only go back to 2001. If I only include data starting in 2001, futures prices only correlate with world consumption at r=0.26 (p&amp;lt;0.02). This seems weird both in that it’s still a lot stronger than the EIA price &amp;lt;&amp;gt; world consumption correlation, and in that it’s much worse than the full time series (I’d expect shortening the time series to increase the p-value, but not to change the correlation much).&lt;/p&gt;

&lt;p&gt;World oil consumption seems more relevant than non-OPEC production. But slide 7 on an &lt;a href=&quot;https://www.eia.gov/finance/markets/crudeoil/reports_presentations/crude.pdf&quot;&gt;EIA slideshow&lt;/a&gt; claims “price increases have coincided with lower consumption”, which suggests the correlation should be negative, not positive. That makes me suspicious of the data on world oil consumption, or suspicious of my methodology.&lt;/p&gt;

&lt;p&gt;It’s possible that there’s something wrong with my methodology. On the one hand, I don’t have much experience with data analysis, so I could have made a mistake. On the other hand, all I’m doing is correlating two lists of numbers; how hard could that be?&lt;/p&gt;

&lt;p&gt;In conclusion, I’m confused.&lt;/p&gt;

&lt;h2 id=&quot;questions-for-future-research&quot;&gt;Questions for future research&lt;/h2&gt;

&lt;ol&gt;
  &lt;li&gt;Why do these different data sources produce seemingly contradictory results?&lt;/li&gt;
  &lt;li&gt;What is the portfolio with the strongest correlation to oil consumption?&lt;/li&gt;
  &lt;li&gt;We actually want to hedge global temperature, which isn’t perfectly related to oil consumption. Is there a better hedge we could use?&lt;/li&gt;
  &lt;li&gt;How can we apply this same approach to other areas worth hedging, such as factory farming or AI progress?&lt;/li&gt;
&lt;/ol&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:2&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;In high school AP Economics, we had to draw graphs like this about three thousand times. I’m leaving out a few details to make the graph simpler—if I drew this on the AP test, I’d probably fail the test. &lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I arrived at this number by downloading EIA’s quarterly data on total world consumption of oil and other liquid fuels and investing.com’s data on WTI oil futures prices, and then calculating the correlation. &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:3&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Instead of looking at the S&amp;amp;P 500, which only includes the largest US stocks, I’d rather compare oil consumption to the global stock market. But I couldn’t quickly find quarterly global stock returns going back to the beginning of 1990. &lt;a href=&quot;#fnref:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>How Do AI Timelines Affect Giving Now vs. Later?</title>
				<pubDate>Mon, 02 Aug 2021 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2021/08/02/ai_timelines_now_vs_later/</link>
				<guid isPermaLink="true">http://mdickens.me/2021/08/02/ai_timelines_now_vs_later/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;&lt;em&gt;Cross-posted to the &lt;a href=&quot;https://forum.effectivealtruism.org/posts/bxvzu7qBF4cAsSu6d/how-do-ai-timelines-affect-giving-now-vs-later&quot;&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;How do AI timelines affect the urgency of working on AI safety?&lt;/p&gt;

&lt;p&gt;It seems plausible that, if artificial general intelligence (AGI) will arrive soon, then we need to spend quickly on AI safety research. And if AGI is still a way off, we can spend more slowly. Are these positions justified? If we have a bunch of capital and we’re deciding how quickly to spend it, do we care about AI timelines? Intuitively, it seems like the answer is yes. But is it possible to support this intuition with a mathematical model?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TLDR:&lt;/strong&gt; Yes. Under plausible model assumptions, there is a direct relationship between AI timelines and how quickly we should spend on AI safety research.&lt;/p&gt;

&lt;!-- more --&gt;

&lt;p&gt;Let’s start with some simplifying assumptions. We want to distill reality into a simple framework while still capturing the key features of the world that we care about. We could do this in lots of different ways, but here’s an attempt:&lt;/p&gt;

&lt;p&gt;We start with some fixed amount of capital that we can choose how to spend. At some point in the future, an artificial general intelligence (AGI) will be developed. This AGI will either be friendly or unfriendly. If it’s unfriendly, everyone dies. We don’t know exactly when AGI will be developed, but we at least have an expected timeline.&lt;/p&gt;

&lt;p&gt;To ensure the AGI is friendly, we will need to do some amount of AI safety research, but we don’t know exactly how much. Once per decade, we decide how much to spend on safety research. Any money we don’t spend can be invested in the market. Then, after AGI emerges, if it’s friendly, we can spend any leftover capital on whatever amazing things will presumably exist at that point.&lt;/p&gt;

&lt;p&gt;(That’s just a high-level explanation; I’m skipping over the mathy bits. &lt;a href=&quot;#appendix-a-some-properties-of-this-model&quot;&gt;Appendix A&lt;/a&gt; contains the full details.)&lt;/p&gt;

&lt;p&gt;If we do enough research by the time AGI is developed, everything works out okay. If we don’t do enough research, we go extinct. The objective is to choose the spending schedule that maximizes the welfare we get out of our remaining capital after AGI emerges. (If we go extinct, we get no welfare. To meet our objective, we need to spend money on preventing unfriendly AI.)&lt;/p&gt;

&lt;p&gt;Philanthropists face this basic tradeoff:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;If we spend more now, we’re more likely to get enough research done in time if AGI arrives soon.&lt;/li&gt;
  &lt;li&gt;If we spend more later, we earn more return on our investments. That way, (a) we can do a greater total amount of research, and (b) we will have more money left over at the end to spend on good things.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If we run the math on this model, this is what it says to do:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;If AGI is very unlikely to emerge this decade, don’t spend any money on research yet. Invest all our capital.&lt;/li&gt;
  &lt;li&gt;Once we get close to the median estimated date of AGI (to within a few decades), start spending around 30% of our capital per decade (roughly 3% per year).&lt;/li&gt;
  &lt;li&gt;In the decades after the median date of AGI (assuming AGI hasn’t emerged yet), reduce the spending rate.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The model’s optimal spending rate varies based on the median date of AGI:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;AI Impacts’ &lt;a href=&quot;https://aiimpacts.org/ai-timeline-surveys/&quot;&gt;review of AI timeline surveys&lt;/a&gt; found that survey respondents estimated a 50% chance of AGI by around 2050. Given that timeline, the model recommends a peak spending rate of 3% per year.&lt;/li&gt;
  &lt;li&gt;For a much longer median timeline of 100 years, the model suggests spending nothing for the first 50 years, then spending around 1% per year after that.&lt;/li&gt;
  &lt;li&gt;If we assume a very short timeline of only one decade, the model says to spend 5% per year for the first decade, and 1–2% per year after that if AGI still hasn’t appeared.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Obviously this is a toy model that makes lots of unrealistic simplifications. For instance, you can’t instantly cause more research to happen by throwing more money at it. But the model corroborates the intuitive notion that if AI timelines are shorter, then we should spend more quickly.&lt;/p&gt;

&lt;p&gt;I have a hard time trusting this intuition on its own. The question of how much to spend now vs. later is really complicated: it’s affected by the exponential growth of investments, the decay in expected value of future worlds where extinction is a possibility, and the complex relationship between research spending and productivity. Humans don’t have good intuitions around that sort of thing. A lot of times, when you do the math, you realize that your seemingly reasonable intuition was totally off base. So even though this model has many limitations, it confirms that the intuition is &lt;em&gt;not&lt;/em&gt; a mistake arising from a failure to comprehend exponential growth. The intuition could still be wrong, but if so, it’s not because of a basic math error&lt;sup id=&quot;fnref:4&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:4&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;It’s also noteworthy that under this model, even with an aggressive AGI timeline, the optimal spending rate doesn’t exceed 5% per year.&lt;/p&gt;

&lt;p&gt;So, do short timelines mean we should spend more quickly? Yes. Maybe. If this model is correct. Which it’s not. But even if it’s wrong, it might still be correct in the ways that matter.&lt;/p&gt;

&lt;p&gt;Python source code is available &lt;a href=&quot;https://github.com/michaeldickens/public-scripts/blob/master/ai_safety_now_later.py&quot;&gt;here&lt;/a&gt;. It’s hard to properly describe the model’s output in words, so you might find it more illustrative to download the source code and play with it.&lt;/p&gt;

&lt;h2 id=&quot;appendix-a-some-properties-of-this-model&quot;&gt;Appendix A: Some properties of this model&lt;/h2&gt;

&lt;p&gt;This model embeds all of the assumptions listed below, any of which could easily be wrong. This list does not cover every assumption, just the explicit ones plus all the implicit ones I could think of in five minutes.&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;We represent all donors supporting AI safety research. We can collectively decide on the optimal spending rate.&lt;/li&gt;
  &lt;li&gt;We decide how much to spend once per decade. (This makes the problem tractable. If we could spend on a yearly basis, the model would have too many independent variables for Python’s optimization library to handle.)&lt;/li&gt;
  &lt;li&gt;We only care about spending decisions for the next two centuries. Ignore anything that happens after that. (Again, this is to make the problem computationally tractable.)&lt;/li&gt;
  &lt;li&gt;Prior to the emergence of AGI, we don’t want to spend money on anything other than AI safety research.&lt;/li&gt;
  &lt;li&gt;After AGI is developed, we get an amount of utility equal to the logarithm of our remaining capital.&lt;/li&gt;
  &lt;li&gt;It’s possible to instantly convert money into research at any scale.&lt;/li&gt;
  &lt;li&gt;The date of AGI follows a log-normal distribution. A log-normal distribution has some relevant properties:
    &lt;ol&gt;
      &lt;li&gt;It’s fat-tailed, which means the longer we go without developing AGI, the more additional time we expect it to take.&lt;/li&gt;
      &lt;li&gt;Unlike, say, an &lt;a href=&quot;https://en.wikipedia.org/wiki/Exponential_distribution&quot;&gt;exponential distribution&lt;/a&gt;, a log-normal distribution allows for a non-trivial probability that our median estimate is off by an order of magnitude. If our median timeline is 30 years, then we might still think it’s plausible that AGI could take 300 years. (Exactly how plausible depends on what standard deviation we use.)&lt;/li&gt;
      &lt;li&gt;On the other hand, unlike, say, a &lt;a href=&quot;https://en.wikipedia.org/wiki/Pareto_distribution&quot;&gt;Pareto distribution&lt;/a&gt;, our probability quickly diminishes as we move out by more orders of magnitude. For example, if we estimate there’s a 50% chance of AGI within 30 years and a 95% chance within 300 years, that implies an extremely confident &lt;a href=&quot;https://en.wikipedia.org/wiki/68%E2%80%9395%E2%80%9399.7_rule#Table_of_numerical_values&quot;&gt;99.95% chance&lt;/a&gt; of AGI by the year 5000.&lt;/li&gt;
    &lt;/ol&gt;
  &lt;/li&gt;
  &lt;li&gt;Research spending required to avert catastrophe follows a log-normal distribution, so it also has the properties listed above.&lt;/li&gt;
&lt;/ol&gt;
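&lt;p&gt;The last two properties can be checked numerically. The sketch below calibrates a log-normal to the example numbers above (median 30 years, 95% probability within 300 years) and evaluates its CDF one and two orders of magnitude past the median:&lt;/p&gt;

```python
from math import erf, log, sqrt

def lognormal_cdf(x, median, sigma):
    """CDF of a log-normal with the given median and log-space std dev."""
    z = (log(x) - log(median)) / sigma
    return 0.5 * (1 + erf(z / sqrt(2)))

# Calibrate sigma so the CDF equals 0.95 at 300 years when the median
# is 30 years (1.6449 is the one-sided 95% z-score of a standard normal)
sigma = log(300 / 30) / 1.6449

p_10x = lognormal_cdf(300, 30, sigma)    # one order of magnitude out
p_100x = lognormal_cdf(3000, 30, sigma)  # two orders of magnitude out
print(f"P(AGI within 300 years)  = {p_10x:.2f}")   # 0.95 by construction
print(f"P(AGI within 3000 years) = {p_100x:.4f}")  # ~0.9995
```

&lt;p&gt;A 5% chance of the median being an order of magnitude too low, but only about a 0.05% chance of it being two orders of magnitude too low, is exactly the “fat but not too fat” tail behavior described above.&lt;/p&gt;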

&lt;p&gt;This model has six input variables:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;how much capital we start with&lt;/li&gt;
  &lt;li&gt;the investment rate of return&lt;/li&gt;
  &lt;li&gt;median research spending required to make AGI safe&lt;sup id=&quot;fnref:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt;&lt;/li&gt;
  &lt;li&gt;standard deviation of research spending required&lt;/li&gt;
  &lt;li&gt;median date of AGI&lt;/li&gt;
  &lt;li&gt;standard deviation of date of AGI&lt;/li&gt;
&lt;/ol&gt;

&lt;h2 id=&quot;appendix-b-alternative-models&quot;&gt;Appendix B: Alternative models&lt;/h2&gt;

&lt;p&gt;Tom Sittler’s &lt;a href=&quot;https://thomas-sittler.github.io/ltf-paper/longtermfuture.pdf&quot;&gt;“The expected value of the long-term future”&lt;/a&gt;&lt;sup id=&quot;fnref:6&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:6&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt; presents some models that treat x-risk as an ongoing concern that can be reduced by deliberate effort (I will refer to these as “periodic models”) rather than a single one-off event that occurs at an unknown time. I find his models more realistic, plus they’re more similar to the &lt;a href=&quot;https://plato.stanford.edu/entries/ramsey-economics/&quot;&gt;Ramsey model&lt;/a&gt; that comes up a lot in economics literature.&lt;/p&gt;

&lt;p&gt;About a year prior to writing this essay, I spent quite some time working with periodic models like the ones Sittler gives. The problem with them is that they’re much harder to solve. I couldn’t find optimal solutions for any of them, and I couldn’t even find approximate solutions with a convex optimization program.&lt;sup id=&quot;fnref:8&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:8&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;As a way around this, I tried &lt;a href=&quot;https://mdickens.me/2020/08/28/x_risk_now_or_later/&quot;&gt;restricting the decision&lt;/a&gt; to only two points in time: now or one century from now. This allowed me to preserve the basic structure of these models while making it possible to find an optimal solution. But this restriction is highly limiting, which means the models’ optimal solutions tell us little about what we should do in reality.&lt;/p&gt;

&lt;p&gt;The new model I presented above has some nice properties. To my knowledge, no previous model achieved all of these:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Civilization has a nonzero probability of going extinct in any particular year, but the probability of survival does not quickly approach zero.&lt;/li&gt;
  &lt;li&gt;We must decide how much of our remaining budget to spend in each period. We cannot reduce our decision to a binary “fund x-risk or don’t”.&lt;/li&gt;
  &lt;li&gt;The optimal spending schedule is feasible to find.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Of the four models that Sittler presented:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;“Trivial model” (section 3.1.2) has property 3, but not 1 or 2;&lt;/li&gt;
  &lt;li&gt;“Constant risk, temporary effects” (section 3.1) has property 2, but not 1 or 3;&lt;/li&gt;
  &lt;li&gt;“Variable risk, temporary effects” (section 3.2) and “Constant risk, lasting effects” (section 3.3) have properties 1 and 2, but not 3.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The models in &lt;a href=&quot;https://mdickens.me/2020/08/28/x_risk_now_or_later/&quot;&gt;my previous essay&lt;/a&gt; are similar to Sittler’s. They gain solvability (property 3) at the expense of periodic decisionmaking (property 2).&lt;/p&gt;

&lt;p&gt;However, the model in this essay does make some fairly specific assumptions, as discussed in Appendix A. Perhaps the most important assumptions are (a) there is only a single potential extinction event and (b) the long-term value of the future is bounded.&lt;/p&gt;

&lt;p&gt;In an earlier draft of this essay, my model did not assign value to any capital left over after AGI emerges. It simply tried to minimize the probability of extinction. This older model came to the same basic conclusion—namely, shorter timelines mean we should spend faster. (The difference was that it spent a much larger percentage of the budget each decade, and under some conditions it would spend 100% of the budget at a certain point.&lt;sup id=&quot;fnref:7&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:7&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;5&lt;/a&gt;&lt;/sup&gt;) But I was concerned that the older model trivialized the question by assuming we could not spend our money on anything but AI safety research—obviously if that’s the only thing we can spend money on, then we should spend lots of money on it. The new model allows for spending money on other things but still reaches the same qualitative conclusion, which is a stronger result.&lt;/p&gt;

&lt;p&gt;None of these models is perfect, but some are more realistic than others. Which one is more realistic largely depends on an important question: what happens after we develop AGI? For example:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Will the AGI behave like a better version of a human, allowing us to do all the stuff we would have done anyway, but at a faster rate? Or will it be so radically better as to make the world unrecognizable?&lt;/li&gt;
  &lt;li&gt;Will the AGI be able to prevent all future x-risks, or will we still need to worry about the possibility of extinction?&lt;/li&gt;
  &lt;li&gt;Does it matter how much capital we have? If we invest more now, that might give the AGI a useful head start. But the AGI might so radically change the economy that the state of the economy prior to AGI won’t matter, or altruistic capital might become (relatively) useless.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The answers to these questions could meaningfully change how much we should be spending on AI safety (or on other forms of x-risk mitigation).&lt;/p&gt;

&lt;p&gt;It’s at least plausible that the world economy allocates far too little to x-risk, and that thoughtful altruists should therefore spend their entire budgets on x-risk reduction. But the same could be argued for other effective and neglected causes such as farm animal welfare, so you have to decide how to prioritize between neglected causes. And that doesn’t get around the problem of determining the optimal spending schedule: even if you should spend your entire budget on x-risk, it doesn’t follow that you should spend the whole budget &lt;em&gt;now&lt;/em&gt;.&lt;/p&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:4&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Unless, of course, my model contains the same math error. Which is entirely possible. &lt;a href=&quot;#fnref:4&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:3&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;We could parameterize the distribution using the mean rather than the median, but I find medians a little more intuitive when working with log-normal distributions. &lt;a href=&quot;#fnref:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:6&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Published 2018-01-02. Accessed 2021-08-02. &lt;a href=&quot;#fnref:6&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:8&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Last year I spent a good 200 hours trying to figure out how to model this problem. Then, after not working on it for a year, I suddenly got an idea and wrote up a working program in an hour. Funny how that works. &lt;a href=&quot;#fnref:8&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:7&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;On the old model, the only downside to spending now rather than later was that you lose out on investment returns, so you can spend less total money. When investments could earn a relatively low return and timelines were short, the model would propose spending a little each decade and then spending the entire remaining budget at a specific point, usually on or shortly before the decade of peak risk. When investments could earn a high return or timelines were long, the model would never spend the whole budget at once, preferring to always save some for later. &lt;a href=&quot;#fnref:7&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>Metaculus Questions Suggest Money Will Do More Good in the Future</title>
				<pubDate>Wed, 21 Jul 2021 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2021/07/21/metaculus_learning_value/</link>
				<guid isPermaLink="true">http://mdickens.me/2021/07/21/metaculus_learning_value/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;&lt;em&gt;Cross-posted to the &lt;a href=&quot;https://forum.effectivealtruism.org/posts/dBFRZdvFFGD8CeS6J/metaculus-questions-suggest-money-will-do-more-good-in-the&quot;&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Update 2021-10-06: I believe I was overconfident in my original interpretations of these Metaculus questions. Some &lt;a href=&quot;https://forum.effectivealtruism.org/posts/dBFRZdvFFGD8CeS6J/metaculus-questions-suggest-money-will-do-more-good-in-the&quot;&gt;EA Forum commenters&lt;/a&gt; pointed out alternative interpretations of people’s answers that could allow us to draw orthogonal or opposite conclusions. For example, on question 1, Metaculus users might predict GiveWell’s top charities to drop off the list by 2031 not because better charities are discovered, but because current charities run out of room for more funding.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In the giving now vs. later debate, a conventional argument in favor of giving now is that people become better off over time, so money spent later will do less good. But some have argued the opposite: as time passes, we learn more about how to do good, and therefore we should give later. (Or, alternatively, we should use our money now to try to accelerate the learning rate.)&lt;/p&gt;

&lt;p&gt;Metaculus provides some evidence that the second argument is the correct one: money spent later will do &lt;em&gt;more&lt;/em&gt; good than money spent now.&lt;/p&gt;

&lt;p&gt;This evidence comes from two Metaculus questions:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;&lt;a href=&quot;https://www.metaculus.com/questions/4628/will-one-of-givewells-2019-top-charities-be-estimated-as-the-most-cost-effective-charity-in-2031/&quot;&gt;Will one of GiveWell’s 2019 top charities be estimated as the most cost-effective charity in 2031?&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://www.metaculus.com/questions/4332/how-much-will-givewell-guess-it-will-cost-to-get-an-outcome-as-good-as-saving-a-life-at-the-end-of-2031/&quot;&gt;How much will GiveWell guess it will cost to get an outcome as good as saving a life, at the end of 2031?&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;A brief explanation for each of these and why they matter:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;On question 1:&lt;/strong&gt; As of July 2021, Metaculus gives a 30% probability that one of GiveWell’s 2019 top charities will be ranked as the most cost-effective charity in 2031. That means a 70% chance that the 2031 charity will &lt;em&gt;not&lt;/em&gt; be one of the 2019 recommendations. This could happen for two reasons: either the 2019 recommended charities run out of room for more funding, or GiveWell finds a charity that’s better than any of the 2019 recommendations. This at least weakly suggests that Metaculus users expect GiveWell to improve its recommendations over time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;On question 2:&lt;/strong&gt; Metaculus estimates that GiveWell’s top charity in 2031 will need to spend $430 per life saved equivalent (according to GiveWell’s own analysis). For comparison, in 2019, GiveWell estimated that its most cost-effective charity spends $592 per life saved equivalent. (These figures are adjusted for inflation.)&lt;/p&gt;
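&lt;p&gt;Taken at face value, these two estimates imply how much more good a GiveWell-directed dollar would do in 2031 than in 2019:&lt;/p&gt;

```python
cost_2019 = 592  # GiveWell's 2019 estimate: $ per life saved equivalent
cost_2031 = 430  # Metaculus's predicted 2031 estimate (inflation-adjusted)

# Lives saved per dollar is the reciprocal of cost per life saved,
# so the ratio of costs gives the relative good done per dollar.
ratio = cost_2019 / cost_2031
print(f"{ratio:.2f}x as much good per dollar")  # 1.38x
```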

&lt;p&gt;As with question 1, this does not unambiguously show that GiveWell top charities are expected to improve over time. Perhaps instead Metaculus expects GiveWell’s estimate is currently too pessimistic, and it will converge on the true answer by 2031. But the cost reduction could also happen because GiveWell top charities truly get more effective over time.&lt;/p&gt;

&lt;p&gt;Some caveats:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;These Metaculus answers only represent the opinions of forecasters, not any formal analysis. (Some forecasters may have incorporated formal analyses into their predictions.)&lt;/li&gt;
  &lt;li&gt;Neither question directly asks whether money spent in 2031 will do more good than money spent now. (I don’t know how to operationalize a direct question like that. Please &lt;a href=&quot;mailto:contact@mdickens.me&quot;&gt;tell me&lt;/a&gt; if you have any ideas.)&lt;/li&gt;
  &lt;li&gt;These questions only ask about GiveWell top charities. Even if GiveWell recommendations become more effective over time, the same might not be true for other cause areas.&lt;/li&gt;
&lt;/ol&gt;



                </description>
			</item>
		
			<item>
				<title>Reverse-Engineering the Philanthropic Discount Rate</title>
				<pubDate>Fri, 09 Jul 2021 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2021/07/09/reverse_engineering_discount_rate/</link>
				<guid isPermaLink="true">http://mdickens.me/2021/07/09/reverse_engineering_discount_rate/</guid>
                <description>
                  
                  
                  
                  &lt;h2 id=&quot;summary&quot;&gt;Summary&lt;/h2&gt;

&lt;ul&gt;
  &lt;li&gt;How much of our resources should we spend now, and how much should we invest for the future? The correct balance is largely determined by how much we discount the future. A higher discount rate means we should spend more now; a lower discount rate tells us to spend more later.&lt;/li&gt;
  &lt;li&gt;In &lt;a href=&quot;https://mdickens.me/2020/07/03/estimating_discount_rate&quot;&gt;a previous essay&lt;/a&gt;, I directly estimated the philanthropic discount rate. Alternatively, we can reverse-engineer the philanthropic discount rate from typical investors’ discount rates if we know the difference between the two. &lt;a href=&quot;#introduction&quot;&gt;[More]&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;In theory, people invest differently depending on what discount rate they use. We can estimate the typical discount rate by looking at historical investment performance. But the results vary depending on what data we look at. &lt;a href=&quot;#estimating-the-typical-discount-rate-delta&quot;&gt;[More]&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;We can also look at surveys of experts’ beliefs on the discount rate, but it’s not clear how to interpret their answers. &lt;a href=&quot;#experts-beliefs-about-delta&quot;&gt;[More]&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;Then we need to know the difference between the typical and philanthropic discount rates. But it’s difficult to say to what extent philanthropists and typical investors disagree. &lt;a href=&quot;#getting-from-the-typical-discount-rate-to-the-philanthropic-discount-rate&quot;&gt;[More]&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;Some additional details raise more concerns about the reliability of this methodology. &lt;a href=&quot;#additional-considerations&quot;&gt;[More]&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;Ultimately, it looks like we cannot effectively reverse-engineer the philanthropic discount rate, even if we spend substantially more effort on the problem. But under some conditions, we prefer to give later as long as we discount at a lower rate than non-philanthropists, which means we don’t need to make precise estimates. &lt;a href=&quot;#conclusion&quot;&gt;[More]&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;!-- more --&gt;

&lt;p&gt;&lt;em&gt;Cross-posted to the &lt;a href=&quot;https://forum.effectivealtruism.org/posts/eLemGWKYbADL6p8pb/reverse-engineering-the-philanthropic-discount-rate&quot;&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;h2 id=&quot;contents&quot;&gt;Contents&lt;/h2&gt;

&lt;ul id=&quot;markdown-toc&quot;&gt;
  &lt;li&gt;&lt;a href=&quot;#summary&quot; id=&quot;markdown-toc-summary&quot;&gt;Summary&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#contents&quot; id=&quot;markdown-toc-contents&quot;&gt;Contents&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#introduction&quot; id=&quot;markdown-toc-introduction&quot;&gt;Introduction&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#estimating-the-typical-discount-rate-delta&quot; id=&quot;markdown-toc-estimating-the-typical-discount-rate-delta&quot;&gt;Estimating the typical discount rate \(\delta\)&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#deriving-delta-from-gdp-and-market-investment-return&quot; id=&quot;markdown-toc-deriving-delta-from-gdp-and-market-investment-return&quot;&gt;Deriving \(\delta\) from GDP and market investment return&lt;/a&gt;        &lt;ul&gt;
          &lt;li&gt;&lt;a href=&quot;#how-should-we-measure-consumption&quot; id=&quot;markdown-toc-how-should-we-measure-consumption&quot;&gt;How should we measure consumption?&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#correlation-between-r-and-g&quot; id=&quot;markdown-toc-correlation-between-r-and-g&quot;&gt;Correlation between r and g&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#some-other-caveats&quot; id=&quot;markdown-toc-some-other-caveats&quot;&gt;Some other caveats&lt;/a&gt;&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#deriving-delta-from-endowment-portfolios&quot; id=&quot;markdown-toc-deriving-delta-from-endowment-portfolios&quot;&gt;Deriving \(\delta\) from endowment portfolios&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#deriving-delta-from-public-companies-shareholder-yield&quot; id=&quot;markdown-toc-deriving-delta-from-public-companies-shareholder-yield&quot;&gt;Deriving \(\delta\) from public companies’ shareholder yield&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#deriving-delta-from-gdp-growth&quot; id=&quot;markdown-toc-deriving-delta-from-gdp-growth&quot;&gt;Deriving \(\delta\) from GDP growth&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#experts-beliefs-about-delta&quot; id=&quot;markdown-toc-experts-beliefs-about-delta&quot;&gt;Experts’ beliefs about \(\delta\)&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#getting-from-the-typical-discount-rate-to-the-philanthropic-discount-rate&quot; id=&quot;markdown-toc-getting-from-the-typical-discount-rate-to-the-philanthropic-discount-rate&quot;&gt;Getting from the typical discount rate to the philanthropic discount rate&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#if-we-assume-factual-agreement--pure-time-preference&quot; id=&quot;markdown-toc-if-we-assume-factual-agreement--pure-time-preference&quot;&gt;If we assume factual agreement + pure time preference&lt;/a&gt;        &lt;ul&gt;
          &lt;li&gt;&lt;a href=&quot;#what-is-the-rate-of-pure-time-preference&quot; id=&quot;markdown-toc-what-is-the-rate-of-pure-time-preference&quot;&gt;What is the rate of pure time preference?&lt;/a&gt;&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#non-factual-disagreements-other-than-pure-time-preference&quot; id=&quot;markdown-toc-non-factual-disagreements-other-than-pure-time-preference&quot;&gt;Non-factual disagreements other than pure time preference&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#if-we-dont-assume-factual-agreement&quot; id=&quot;markdown-toc-if-we-dont-assume-factual-agreement&quot;&gt;If we don’t assume factual agreement&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#additional-considerations&quot; id=&quot;markdown-toc-additional-considerations&quot;&gt;Additional considerations&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#rate-of-risk-aversion-eta&quot; id=&quot;markdown-toc-rate-of-risk-aversion-eta&quot;&gt;Rate of risk aversion (\(\eta\))&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#why-is-the-risk-free-rate-so-low&quot; id=&quot;markdown-toc-why-is-the-risk-free-rate-so-low&quot;&gt;Why is the risk-free rate so low?&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#conclusion&quot; id=&quot;markdown-toc-conclusion&quot;&gt;Conclusion&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#appendix&quot; id=&quot;markdown-toc-appendix&quot;&gt;Appendix&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#appendix-a-optimal-consumption-under-uncertainty&quot; id=&quot;markdown-toc-appendix-a-optimal-consumption-under-uncertainty&quot;&gt;Appendix A: Optimal consumption under uncertainty&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#appendix-b-details-on-harvard-endowment-annual-report&quot; id=&quot;markdown-toc-appendix-b-details-on-harvard-endowment-annual-report&quot;&gt;Appendix B: Details on Harvard endowment annual report&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#notes&quot; id=&quot;markdown-toc-notes&quot;&gt;Notes&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h1 id=&quot;introduction&quot;&gt;Introduction&lt;/h1&gt;

&lt;p&gt;&lt;a href=&quot;https://mdickens.me/2020/07/03/estimating_discount_rate/&quot;&gt;Last time&lt;/a&gt;, I attempted to estimate the philanthropic discount rate. We should discount according to the probability that our resources become worthless in the future. I identified three main reasons why this might happen: existential catastrophe, expropriation, and value drift. Then I estimated the probabilities of each of these individually.&lt;/p&gt;

&lt;p&gt;As an alternate approach, what if we reverse-engineer the philanthropic discount rate (call it \(\delta_P\)) from the typical discount rate (call it \(\delta\))?&lt;/p&gt;

&lt;p&gt;We can find \(\delta_P\) as a combination of:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;the typical discount rate \(\delta\)&lt;/li&gt;
  &lt;li&gt;the difference between \(\delta\) and \(\delta_P\)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For example, if we know that \(\delta = 2\%\) and \(\delta_P\) is 0.5 percentage points less than \(\delta\), then \(\delta_P = 1.5\%\).&lt;/p&gt;

&lt;p&gt;This approach has some advantages over estimating \(\delta_P\) directly. We can only speculate as to the risk of existential catastrophe, expropriation, or value drift. But we can empirically observe \(\delta\) by looking at financial markets. Then all we need to know is how philanthropists and ordinary investors differ.&lt;/p&gt;

&lt;h1 id=&quot;estimating-the-typical-discount-rate-delta&quot;&gt;Estimating the typical discount rate \(\delta\)&lt;/h1&gt;

&lt;p&gt;We can estimate most investors’ \(\delta\) by looking at investment returns. According to standard economic theory, we can determine \(\delta\) if we know four things:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;The market rate of return&lt;/li&gt;
  &lt;li&gt;The volatility of the market&lt;/li&gt;
  &lt;li&gt;People’s appetites for risk&lt;/li&gt;
  &lt;li&gt;How much people consume vs. invest&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In this section, I will avoid the mathematical details, but I provide the exact formula in &lt;a href=&quot;#appendix-a-optimal-consumption-under-uncertainty&quot;&gt;Appendix A&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Let’s see if we can use this formula to estimate \(\delta\) using historical data.&lt;/p&gt;

&lt;p&gt;(Short answer: We can’t—it’s too hard to pin down the values of the four inputs.)&lt;/p&gt;
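&lt;p&gt;As a rough illustration, the deterministic (zero-volatility) version of this relationship is the textbook Ramsey rule \(\delta = \mu - \eta g\). This simplification happens to reproduce the \(\eta = 1\) column of Table 1 below, though the other columns require the full volatility adjustment from Appendix A:&lt;/p&gt;

```python
# Plain Ramsey rule: delta = mu - eta * g (all in percentage points).
# (g, mu) pairs are real GDP growth and mean log real return, copied
# from the first few rows of Table 1 (RORE, 1870-2015).
table1 = {
    "AUS": (3.37, 5.47),
    "BEL": (2.26, 5.90),
    "CHE": (2.45, 5.29),
    "DEU": (3.83, 6.14),
    "DNK": (2.74, 6.93),
}

def ramsey_delta(mu, g, eta):
    """Implied discount rate given return mu, consumption growth g, and
    relative risk aversion eta (volatility adjustment omitted)."""
    return mu - eta * g

for country, (g, mu) in table1.items():
    print(country, round(ramsey_delta(mu, g, eta=1), 1))
```

&lt;p&gt;For example, Australia gives \(5.47 - 3.37 = 2.1\), matching the table’s \(\eta = 1\) value.&lt;/p&gt;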

&lt;h2 id=&quot;deriving-delta-from-gdp-and-market-investment-return&quot;&gt;Deriving \(\delta\) from GDP and market investment return&lt;/h2&gt;

&lt;p&gt;At a national scale, the consumption growth rate \(g\) equals the GDP growth rate, and the investment return is the aggregate return achieved by capital markets. We can use these to estimate \(\delta\).&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://academic.oup.com/qje/article/134/3/1225/5435538#137577845&quot;&gt;The Rate of Return on Everything, 1870-2015&lt;/a&gt; (henceforth “RORE”) provides equity, bond, and housing returns, as well as GDPs, for 16 countries back as far as 1870.&lt;/p&gt;

&lt;p&gt;Table 1 includes:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Real GDP growth \(g\)&lt;/li&gt;
  &lt;li&gt;Mean \(\mu\) and standard deviation \(\sigma\) of log real returns for the weighted combination of all investment assets in the country (weighting determined by RORE)&lt;/li&gt;
  &lt;li&gt;The derived value of \(\delta\) for various values of \(\eta\), where \(\eta\) is the rate at which marginal utility diminishes with consumption (also known as the rate of &lt;a href=&quot;https://en.wikipedia.org/wiki/Risk_aversion#Relative_risk_aversion&quot;&gt;relative risk aversion&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;div align=&quot;center&quot;&gt;&lt;strong&gt;Table 1:&lt;/strong&gt; RORE 1870–2015 by country&lt;/div&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt;\(g\)&lt;/th&gt;
      &lt;th&gt;\(\mu\)&lt;/th&gt;
      &lt;th&gt;\(\sigma\)&lt;/th&gt;
      &lt;th&gt;\(\delta\), \(\eta\)=0.5&lt;/th&gt;
      &lt;th&gt;\(\delta\), \(\eta\)=1&lt;/th&gt;
      &lt;th&gt;\(\delta\), \(\eta\)=1.5&lt;/th&gt;
      &lt;th&gt;\(\delta\), \(\eta\)=2&lt;/th&gt;
      &lt;th&gt;\(\delta\), \(\eta\)=3&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;AUS&lt;/td&gt;
      &lt;td&gt;3.37&lt;/td&gt;
      &lt;td&gt;5.47&lt;/td&gt;
      &lt;td&gt;7.23&lt;/td&gt;
      &lt;td&gt;3.9&lt;/td&gt;
      &lt;td&gt;2.1&lt;/td&gt;
      &lt;td&gt;0.5&lt;/td&gt;
      &lt;td&gt;-1&lt;/td&gt;
      &lt;td&gt;-3.6&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;BEL&lt;/td&gt;
      &lt;td&gt;2.26&lt;/td&gt;
      &lt;td&gt;5.90&lt;/td&gt;
      &lt;td&gt;9.11&lt;/td&gt;
      &lt;td&gt;4.9&lt;/td&gt;
      &lt;td&gt;3.6&lt;/td&gt;
      &lt;td&gt;2.6&lt;/td&gt;
      &lt;td&gt;1.8&lt;/td&gt;
      &lt;td&gt;0.8&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;CHE&lt;/td&gt;
      &lt;td&gt;2.45&lt;/td&gt;
      &lt;td&gt;5.29&lt;/td&gt;
      &lt;td&gt;6.82&lt;/td&gt;
      &lt;td&gt;4.1&lt;/td&gt;
      &lt;td&gt;2.8&lt;/td&gt;
      &lt;td&gt;1.7&lt;/td&gt;
      &lt;td&gt;0.6&lt;/td&gt;
      &lt;td&gt;-1.1&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;DEU&lt;/td&gt;
      &lt;td&gt;3.83&lt;/td&gt;
      &lt;td&gt;6.14&lt;/td&gt;
      &lt;td&gt;6.32&lt;/td&gt;
      &lt;td&gt;4.3&lt;/td&gt;
      &lt;td&gt;2.3&lt;/td&gt;
      &lt;td&gt;0.4&lt;/td&gt;
      &lt;td&gt;-1.3&lt;/td&gt;
      &lt;td&gt;-4.6&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;DNK&lt;/td&gt;
      &lt;td&gt;2.74&lt;/td&gt;
      &lt;td&gt;6.93&lt;/td&gt;
      &lt;td&gt;6.43&lt;/td&gt;
      &lt;td&gt;5.6&lt;/td&gt;
      &lt;td&gt;4.2&lt;/td&gt;
      &lt;td&gt;2.9&lt;/td&gt;
      &lt;td&gt;1.7&lt;/td&gt;
      &lt;td&gt;-0.5&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;ESP&lt;/td&gt;
      &lt;td&gt;3.02&lt;/td&gt;
      &lt;td&gt;4.10&lt;/td&gt;
      &lt;td&gt;8.62&lt;/td&gt;
      &lt;td&gt;2.7&lt;/td&gt;
      &lt;td&gt;1.1&lt;/td&gt;
      &lt;td&gt;-0.3&lt;/td&gt;
      &lt;td&gt;-1.6&lt;/td&gt;
      &lt;td&gt;-3.5&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;FIN&lt;/td&gt;
      &lt;td&gt;3.91&lt;/td&gt;
      &lt;td&gt;8.03&lt;/td&gt;
      &lt;td&gt;15.52&lt;/td&gt;
      &lt;td&gt;6.4&lt;/td&gt;
      &lt;td&gt;4.1&lt;/td&gt;
      &lt;td&gt;2.5&lt;/td&gt;
      &lt;td&gt;1.4&lt;/td&gt;
      &lt;td&gt;1.1&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;FRA&lt;/td&gt;
      &lt;td&gt;2.38&lt;/td&gt;
      &lt;td&gt;4.48&lt;/td&gt;
      &lt;td&gt;8.75&lt;/td&gt;
      &lt;td&gt;3.4&lt;/td&gt;
      &lt;td&gt;2.1&lt;/td&gt;
      &lt;td&gt;1&lt;/td&gt;
      &lt;td&gt;0.1&lt;/td&gt;
      &lt;td&gt;-1.1&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;GBR&lt;/td&gt;
      &lt;td&gt;2.32&lt;/td&gt;
      &lt;td&gt;4.28&lt;/td&gt;
      &lt;td&gt;8.02&lt;/td&gt;
      &lt;td&gt;3.2&lt;/td&gt;
      &lt;td&gt;2&lt;/td&gt;
      &lt;td&gt;0.9&lt;/td&gt;
      &lt;td&gt;0&lt;/td&gt;
      &lt;td&gt;-1.4&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;ITA&lt;/td&gt;
      &lt;td&gt;3.12&lt;/td&gt;
      &lt;td&gt;4.62&lt;/td&gt;
      &lt;td&gt;6.72&lt;/td&gt;
      &lt;td&gt;3.1&lt;/td&gt;
      &lt;td&gt;1.5&lt;/td&gt;
      &lt;td&gt;0&lt;/td&gt;
      &lt;td&gt;-1.4&lt;/td&gt;
      &lt;td&gt;-3.8&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;JPN&lt;/td&gt;
      &lt;td&gt;4.29&lt;/td&gt;
      &lt;td&gt;5.04&lt;/td&gt;
      &lt;td&gt;7.45&lt;/td&gt;
      &lt;td&gt;3&lt;/td&gt;
      &lt;td&gt;0.8&lt;/td&gt;
      &lt;td&gt;-1.3&lt;/td&gt;
      &lt;td&gt;-3.3&lt;/td&gt;
      &lt;td&gt;-6.7&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;NLD&lt;/td&gt;
      &lt;td&gt;3.48&lt;/td&gt;
      &lt;td&gt;5.29&lt;/td&gt;
      &lt;td&gt;6.92&lt;/td&gt;
      &lt;td&gt;3.6&lt;/td&gt;
      &lt;td&gt;1.8&lt;/td&gt;
      &lt;td&gt;0.1&lt;/td&gt;
      &lt;td&gt;-1.4&lt;/td&gt;
      &lt;td&gt;-4.2&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;NOR&lt;/td&gt;
      &lt;td&gt;3.28&lt;/td&gt;
      &lt;td&gt;6.64&lt;/td&gt;
      &lt;td&gt;7.26&lt;/td&gt;
      &lt;td&gt;5.1&lt;/td&gt;
      &lt;td&gt;3.4&lt;/td&gt;
      &lt;td&gt;1.8&lt;/td&gt;
      &lt;td&gt;0.3&lt;/td&gt;
      &lt;td&gt;-2.1&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;PRT&lt;/td&gt;
      &lt;td&gt;2.94&lt;/td&gt;
      &lt;td&gt;5.04&lt;/td&gt;
      &lt;td&gt;8.88&lt;/td&gt;
      &lt;td&gt;3.7&lt;/td&gt;
      &lt;td&gt;2.1&lt;/td&gt;
      &lt;td&gt;0.7&lt;/td&gt;
      &lt;td&gt;-0.4&lt;/td&gt;
      &lt;td&gt;-2.2&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;SWE&lt;/td&gt;
      &lt;td&gt;3.06&lt;/td&gt;
      &lt;td&gt;6.77&lt;/td&gt;
      &lt;td&gt;8.62&lt;/td&gt;
      &lt;td&gt;5.3&lt;/td&gt;
      &lt;td&gt;3.7&lt;/td&gt;
      &lt;td&gt;2.3&lt;/td&gt;
      &lt;td&gt;1&lt;/td&gt;
      &lt;td&gt;-0.9&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;USA&lt;/td&gt;
      &lt;td&gt;3.12&lt;/td&gt;
      &lt;td&gt;5.51&lt;/td&gt;
      &lt;td&gt;8.39&lt;/td&gt;
      &lt;td&gt;4&lt;/td&gt;
      &lt;td&gt;2.4&lt;/td&gt;
      &lt;td&gt;0.9&lt;/td&gt;
      &lt;td&gt;-0.4&lt;/td&gt;
      &lt;td&gt;-2.4&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;div align=&quot;center&quot;&gt;&lt;strong&gt;Table 2:&lt;/strong&gt; RORE 1870–2015, summary statistics&lt;/div&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt;\(g\)&lt;/th&gt;
      &lt;th&gt;\(\mu\)&lt;/th&gt;
      &lt;th&gt;\(\sigma\)&lt;/th&gt;
      &lt;th&gt;\(\delta\), \(\eta\)=0.5&lt;/th&gt;
      &lt;th&gt;\(\delta\), \(\eta\)=1&lt;/th&gt;
      &lt;th&gt;\(\delta\), \(\eta\)=1.5&lt;/th&gt;
      &lt;th&gt;\(\delta\), \(\eta\)=2&lt;/th&gt;
      &lt;th&gt;\(\delta\), \(\eta\)=3&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;median&lt;/td&gt;
      &lt;td&gt;3.09&lt;/td&gt;
      &lt;td&gt;5.38&lt;/td&gt;
      &lt;td&gt;7.74&lt;/td&gt;
      &lt;td&gt;3.95&lt;/td&gt;
      &lt;td&gt;2.2&lt;/td&gt;
      &lt;td&gt;0.9&lt;/td&gt;
      &lt;td&gt;-0.2&lt;/td&gt;
      &lt;td&gt;-2.15&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;mean&lt;/td&gt;
      &lt;td&gt;3.1&lt;/td&gt;
      &lt;td&gt;5.6&lt;/td&gt;
      &lt;td&gt;8.19&lt;/td&gt;
      &lt;td&gt;4.14&lt;/td&gt;
      &lt;td&gt;2.5&lt;/td&gt;
      &lt;td&gt;1.04&lt;/td&gt;
      &lt;td&gt;-0.24&lt;/td&gt;
      &lt;td&gt;-2.26&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;(These \(\delta\)s are the median/mean values of the per-country \(\delta\)s in Table 1, not \(\delta\)s calculated from the median/mean \(g, \mu, \sigma\).)&lt;/p&gt;

&lt;h3 id=&quot;how-should-we-measure-consumption&quot;&gt;How should we measure consumption?&lt;/h3&gt;

&lt;p&gt;In the above tables, I used GDP as the measure of consumption because it’s the most well-known metric (and the only consumption metric that comes with the RORE data set). But it’s not obvious that we should use GDP. Arguably, we should use &lt;a href=&quot;https://en.wikipedia.org/wiki/Net_national_income&quot;&gt;net national income&lt;/a&gt;, which excludes depreciation of manufactured capital. Or perhaps we should use GDP minus investment.&lt;/p&gt;

&lt;p&gt;Subtracting investment from GDP, leaving only the “consumption-y” part, reduces \(g\) by about 20% (&lt;a href=&quot;https://tcdata360.worldbank.org/indicators/inv.all.pct&quot;&gt;World Bank, 2020&lt;/a&gt;). This increases \(\delta\), and the magnitude of the increase depends on \(\eta\).&lt;/p&gt;

&lt;div align=&quot;center&quot;&gt;&lt;strong&gt;Table 3:&lt;/strong&gt; RORE 1870–2015, subtracting 20% from GDP&lt;/div&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt;\(g\)&lt;/th&gt;
      &lt;th&gt;\(\mu\)&lt;/th&gt;
      &lt;th&gt;\(\sigma\)&lt;/th&gt;
      &lt;th&gt;\(\delta\), \(\eta\)=0.5&lt;/th&gt;
      &lt;th&gt;\(\delta\), \(\eta\)=1&lt;/th&gt;
      &lt;th&gt;\(\delta\), \(\eta\)=1.5&lt;/th&gt;
      &lt;th&gt;\(\delta\), \(\eta\)=2&lt;/th&gt;
      &lt;th&gt;\(\delta\), \(\eta\)=3&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;median&lt;/td&gt;
      &lt;td&gt;2.47&lt;/td&gt;
      &lt;td&gt;5.38&lt;/td&gt;
      &lt;td&gt;7.74&lt;/td&gt;
      &lt;td&gt;4.25&lt;/td&gt;
      &lt;td&gt;2.9&lt;/td&gt;
      &lt;td&gt;1.65&lt;/td&gt;
      &lt;td&gt;0.9&lt;/td&gt;
      &lt;td&gt;-0.3&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;mean&lt;/td&gt;
      &lt;td&gt;2.48&lt;/td&gt;
      &lt;td&gt;5.6&lt;/td&gt;
      &lt;td&gt;8.19&lt;/td&gt;
      &lt;td&gt;4.45&lt;/td&gt;
      &lt;td&gt;3.11&lt;/td&gt;
      &lt;td&gt;1.98&lt;/td&gt;
      &lt;td&gt;1.01&lt;/td&gt;
      &lt;td&gt;-0.42&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;
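&lt;p&gt;To see why the increase scales with \(\eta\): in a Ramsey-style formula like the one quoted later from Gollier et al., \(\delta = r - \eta g + \frac{1}{2} \eta (1 + \eta) \sigma^2\), growth enters linearly with coefficient \(-\eta\). The following sketch uses that simplified formula rather than the exact Appendix A formula, so it only approximates the shift between Tables 2 and 3:&lt;/p&gt;

```python
# Sketch: in the Ramsey-style formula delta = r - eta*g + 0.5*eta*(1+eta)*sigma^2,
# g enters linearly with coefficient -eta, so scaling g down by 20% raises the
# implied delta by exactly eta * 0.2 * g, independent of r and sigma.
# (The post's exact Appendix A formula may differ slightly.)

def delta_shift(g, eta, cut=0.20):
    """Increase in delta (percentage points) when consumption growth g is cut by `cut`."""
    return eta * cut * g

g = 3.09  # median real GDP growth from Table 2, in percent
for eta in (0.5, 1, 1.5, 2, 3):
    print(f"eta={eta}: delta rises by about {delta_shift(g, eta):.2f} pp")
```

This matches the tables reasonably well: for \(\eta = 1\), the predicted shift of about 0.6 percentage points is close to the observed move in the median \(\delta\) from 2.2 to 2.9.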

&lt;h3 id=&quot;correlation-between-r-and-g&quot;&gt;Correlation between r and g&lt;/h3&gt;

&lt;p&gt;According to this model, \(r_t\) is proportional to \(g_t\) for any period \(t\), so they should be perfectly correlated. That is to say, people’s spending in a particular period depends on how their investments performed. They’re willing to spend more when their investments perform better, and vice versa.&lt;/p&gt;

&lt;p&gt;In real life, \(r_t\) and \(g_t\) are only weakly correlated. In the RORE data set, they had a correlation of only 0.14 (&lt;a href=&quot;https://academic.oup.com/qje/article/134/3/1225/5435538#supplementary-data&quot;&gt;Online Appendix&lt;/a&gt; Q, Table A.26). So aggregate consumption has little to do with investment return.&lt;/p&gt;

&lt;p&gt;Our model predicts a perfect correlation between \(r_t\) and \(g_t\), but the observed correlation is nowhere close to perfect. That casts doubt on the model’s reliability.&lt;/p&gt;

&lt;h3 id=&quot;some-other-caveats&quot;&gt;Some other caveats&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;These figures for market return represent gross return, not net. If we used net returns instead, we would get a higher value for \(\delta\) (assuming \(\eta &amp;gt; 1\)).&lt;/li&gt;
  &lt;li&gt;Most investors cannot practically invest in a representative sample of the entire market, because doing so requires buying diversified real estate holdings. A less-diversified portfolio would probably not have lower expected return, but it would have greater volatility, resulting in a lower value of \(\delta\) (assuming \(\eta &amp;gt; 1\)).&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;deriving-delta-from-endowment-portfolios&quot;&gt;Deriving \(\delta\) from endowment portfolios&lt;/h2&gt;

&lt;p&gt;We can estimate \(\delta\) by looking at how universities manage their endowments. This approach has a few advantages:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Universities are legally required to publish their financial data.&lt;/li&gt;
  &lt;li&gt;Universities often intend for their endowments to last for centuries.&lt;/li&gt;
  &lt;li&gt;Large universities typically hire teams to manage their endowments, and these teams make intentional, strategic choices about how to spend endowment money over time. (Some of them might even explicitly use the same &lt;a href=&quot;#appendix-a-optimal-consumption-under-uncertainty&quot;&gt;model&lt;/a&gt; as I use to inform their decision-making.)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let’s use the &lt;a href=&quot;https://finance.harvard.edu/annual-report&quot;&gt;Harvard endowment&lt;/a&gt; as an example. Harvard provides annual financial reports going back to fiscal year 2003.&lt;/p&gt;

&lt;p&gt;(To get a more complete perspective, we could combine data from multiple universities, but for now I will only look at the Harvard endowment.)&lt;/p&gt;

&lt;p&gt;Over 2003–2019, Harvard had an average annual consumption rate \(\lambda\) of 4.2%. If we define \(\lambda_G\) as the consumption rate when excluding gifts, we get an average \(\lambda_G\) of 3.2%. (For both \(\lambda\) and \(\lambda_G\), the median equaled the mean to within 0.1 percentage points.) As explained in &lt;a href=&quot;#appendix-a-optimal-consumption-under-uncertainty&quot;&gt;Appendix A&lt;/a&gt;, we can use the consumption rate to derive the discount rate.&lt;/p&gt;

&lt;p&gt;See &lt;a href=&quot;#appendix-b-detail-on-harvard-endowment-annual-report&quot;&gt;Appendix B&lt;/a&gt; for the full table of yearly consumption rates.&lt;/p&gt;

&lt;p&gt;I measure both \(\lambda\) and \(\lambda_G\) because it is unclear how we should treat gifts. If we treat gifts simply as part of the portfolio, we get \(\lambda\). But arguably, if the endowment receives outside income in the form of gifts, that should not count as part of the portfolio, in which case we can use \(\lambda_G\) instead. Ideally, we would treat the gifts as coming from an external asset, determine the value of that asset, and count it as part of the endowment; but we don’t have enough data to properly do this.&lt;/p&gt;

&lt;p&gt;I calculate the consumption rate \(\lambda\) (or \(\lambda_G\)) using the expenditures for the current year divided by the value of the endowment at the end of the current year. Arguably, we should use the value of the endowment at the beginning of the year, because the university executives make budget decisions based on the endowment’s value at the beginning of the year, not the end. This method would give a median and mean \(\lambda\) of 4.4%, and a median/mean \(\lambda_G\) of 3.3%/3.2%.&lt;/p&gt;

&lt;p&gt;The following table gives implied \(\delta\) for various values of \(\eta\), using \(\mu = 6\%, \sigma = 7\%\) (which approximately equal the long-run market return and standard deviation according to RORE).&lt;/p&gt;

&lt;div align=&quot;center&quot;&gt;&lt;strong&gt;Table 4:&lt;/strong&gt; Implied &amp;delta; from Harvard endowment statements&lt;/div&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;\(\lambda\)&lt;/th&gt;
      &lt;th&gt;\(\delta\), \(\eta\)=0.5&lt;/th&gt;
      &lt;th&gt;\(\delta\), \(\eta\)=1&lt;/th&gt;
      &lt;th&gt;\(\delta\), \(\eta\)=1.5&lt;/th&gt;
      &lt;th&gt;\(\delta\), \(\eta\)=2&lt;/th&gt;
      &lt;th&gt;\(\delta\), \(\eta\)=3&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;3.2&lt;/td&gt;
      &lt;td&gt;4.7&lt;/td&gt;
      &lt;td&gt;3.3&lt;/td&gt;
      &lt;td&gt;1.9&lt;/td&gt;
      &lt;td&gt;0.7&lt;/td&gt;
      &lt;td&gt;-1.3&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;4.4&lt;/td&gt;
      &lt;td&gt;5.3&lt;/td&gt;
      &lt;td&gt;4.5&lt;/td&gt;
      &lt;td&gt;3.8&lt;/td&gt;
      &lt;td&gt;3.2&lt;/td&gt;
      &lt;td&gt;2.5&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;
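&lt;p&gt;As a rough cross-check on Table 4, we can invert a standard closed-form consumption rule. Merton’s (1969) rule for optimal consumption under a CRRA utility function, \(\lambda = \delta / \eta + (1 - 1/\eta)(\mu - \eta \sigma^2 / 2)\), can be solved for \(\delta\). This is a textbook result that may differ in detail from the formula in Appendix A, so the numbers it produces are close to, but not exactly, those in Table 4:&lt;/p&gt;

```python
# Rough cross-check on Table 4 (not necessarily the post's exact Appendix A
# formula): invert Merton's closed-form optimal consumption rate,
#   lambda = delta/eta + (1 - 1/eta) * (mu - eta*sigma^2/2),
# to recover the discount rate implied by an observed spending rate.
# All rates are decimals.

def implied_delta(lam, mu, sigma, eta):
    # Solve the Merton rule above for delta. At eta = 1 (log utility)
    # it reduces to delta = lambda.
    return eta * (lam - (1 - 1 / eta) * (mu - eta * sigma**2 / 2))

mu, sigma = 0.06, 0.07      # long-run return and volatility used for Table 4
for lam in (0.032, 0.044):  # Harvard's lambda_G and beginning-of-year lambda
    row = [round(100 * implied_delta(lam, mu, sigma, eta), 1)
           for eta in (0.5, 1, 1.5, 2, 3)]
    print(f"lambda={lam:.1%}: {row}")
```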

&lt;p&gt;Universities have sources of revenue other than the endowment (most notably tuition and research funding). A proper analysis would consider all sources of revenue and all expenditures. However, as with gifts, we cannot easily model these other revenue sources as capital assets, which makes it harder to deduce \(\delta\) using this model. The above analysis treats the endowment as an isolated system that doesn’t know anything about the rest of the university’s financial activities.&lt;/p&gt;

&lt;h2 id=&quot;deriving-delta-from-public-companies-shareholder-yield&quot;&gt;Deriving \(\delta\) from public companies’ shareholder yield&lt;/h2&gt;

&lt;p&gt;Let’s look at how publicly traded companies make consumption choices. To simplify, when companies earn income, they can do two things with it: invest it back into the company (via R&amp;amp;D, acquisitions, etc.), or let shareholders “consume” it by paying dividends or buying back shares.&lt;/p&gt;

&lt;p&gt;In some sense, dividends don’t count as consumption, because shareholders might invest the money back into other assets. But it can still be considered consumption from the perspective of the company because the company can no longer invest with that money, and shareholders now have the money to use however they like. We could say that money feeds into shareholders’ utility functions, and the company is agnostic as to how shareholders use the money, even if they invest it.&lt;/p&gt;

&lt;p&gt;According to &lt;a href=&quot;https://www.yardeni.com/pub/buybackdiv.pdf&quot;&gt;Yardeni (2020), “S&amp;amp;P 500 Buybacks &amp;amp; Dividends”&lt;/a&gt;, the S&amp;amp;P 500 shareholder yield (that is, dividend yield plus buyback yield) from 1998 to 2020 has varied from about 2% to 7%, with an average around 4.5%. (Yardeni does not report exact figures; I’m estimating these based on the graph on page 9.)&lt;/p&gt;

&lt;p&gt;Shareholder yield equals dividends + buybacks divided by market capitalization. Is that really the right denominator to use? Market cap represents the discounted present value of the market, not the capital of the underlying companies per se. Arguably, we should use total assets in the denominator instead (calculating shareholder yield as &lt;code&gt;(dividends + buybacks) / assets&lt;/code&gt;), or perhaps balance sheet capital (which can be defined in several ways; one such definition is &lt;code&gt;property, plant, and equipment + current assets - debt in current liabilities - cash and short-term investments&lt;/code&gt;). Using any of these would give a higher consumption rate.&lt;/p&gt;

&lt;p&gt;I don’t believe any denominator based on a company’s balance sheet properly reflects the value of the capital a company possesses. Companies hold a great deal of value in non-balance-sheet assets such as human capital. In some sectors (such as technology), the majority of a company’s assets do not appear on the balance sheet. The market cap of a company represents something like the net value of all its capital, whether physical, human, or otherwise. I’m not entirely convinced that we should use market cap as the definition of “capital”, but it seems better than using anything else.&lt;/p&gt;
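&lt;p&gt;A toy example of how much the denominator choice matters. All dollar figures here are made up; the realistic part is only the ordering, with market cap above book-style capital measures for most (especially technology) firms:&lt;/p&gt;

```python
# Hypothetical illustration of how the choice of denominator changes the
# measured corporate "consumption rate". The dollar figures are invented;
# only the ordering (market cap above book-style capital) is realistic.

payouts = 45.0                 # dividends + buybacks, in billions (hypothetical)
market_cap = 1000.0            # in billions (hypothetical)
balance_sheet_capital = 600.0  # book-style capital measure, in billions (hypothetical)

print(f"yield on market cap:            {payouts / market_cap:.1%}")
print(f"yield on balance-sheet capital: {payouts / balance_sheet_capital:.1%}")
```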

&lt;p&gt;If we assume a 4.5% shareholder yield, along with \(\mu = 5\%, \sigma = 17\%\) for the global equities market&lt;sup id=&quot;fnref:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;&lt;sup id=&quot;fnref:4&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:4&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt;, we can construct this table of possible values of \(\delta\) depending on \(\eta\):&lt;/p&gt;

&lt;div align=&quot;center&quot;&gt;&lt;strong&gt;Table 5:&lt;/strong&gt; Implied &amp;delta; from public companies&apos; shareholder yield&lt;/div&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;\(\lambda\)&lt;/th&gt;
      &lt;th&gt;\(\delta\), \(\eta\)=0.5&lt;/th&gt;
      &lt;th&gt;\(\delta\), \(\eta\)=1&lt;/th&gt;
      &lt;th&gt;\(\delta\), \(\eta\)=1.5&lt;/th&gt;
      &lt;th&gt;\(\delta\), \(\eta\)=2&lt;/th&gt;
      &lt;th&gt;\(\delta\), \(\eta\)=3&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;4.5&lt;/td&gt;
      &lt;td&gt;5.2&lt;/td&gt;
      &lt;td&gt;4.6&lt;/td&gt;
      &lt;td&gt;4.8&lt;/td&gt;
      &lt;td&gt;5.7&lt;/td&gt;
      &lt;td&gt;9.6&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;The substantially higher volatility of equities (compared to the global market portfolio, which also includes bonds and real estate) results in a higher estimate for \(\delta\).&lt;/p&gt;

&lt;h2 id=&quot;deriving-delta-from-gdp-growth&quot;&gt;Deriving \(\delta\) from GDP growth&lt;/h2&gt;

&lt;p&gt;Liu (2012), &lt;a href=&quot;https://doi.org/10.1016/j.ecolecon.2011.11.007&quot;&gt;Inferring the rate of pure time preference under uncertainty&lt;/a&gt;, uses a similar methodology to estimate \(\delta\) as a function of GDP growth, the volatility of GDP growth, and the risk-free rate of return, using US data from 1889 to 1978 (taken from Mehra &amp;amp; Prescott (1985), &lt;a href=&quot;https://www.academicwebpages.com/preview/mehra/pdf/The%20Equity%20Premium%20A%20Puzzle.pdf&quot;&gt;The Equity Premium: A Puzzle&lt;/a&gt;). They find that, for \(\eta \le 1\), \(\delta\) “lies within ±1% from zero”; and for \(\eta &amp;gt; 1\), \(\delta\) “tends to be negative.”&lt;sup id=&quot;fnref:6&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:6&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;h2 id=&quot;experts-beliefs-about-delta&quot;&gt;Experts’ beliefs about \(\delta\)&lt;/h2&gt;

&lt;p&gt;Drupp et al. (2017), &lt;a href=&quot;http://eprints.whiterose.ac.uk/125845/1/Actual_resubmission_DiscountingDisentangled_AEJP_2017_R2.pdf&quot;&gt;Discounting Disentangled&lt;/a&gt;, presents a survey asking experts for their beliefs about the value of the discount rate.&lt;/p&gt;

&lt;p&gt;Before proceeding, we should clarify two terms: (1) the utility discount rate \(\delta\) and (2) the rate of pure time preference. A pure time preference represents the extent to which we discount future beings &lt;em&gt;merely&lt;/em&gt; because they live in the future. Many philosophers argue (and I would agree) that we should not admit any pure time preference. The utility discount rate \(\delta\) includes the rate of pure time preference, but it can also account for other reasons to discount the future, such as the possibility of extinction or expropriation.&lt;/p&gt;

&lt;p&gt;Unfortunately, Drupp et al. conflate these two terms. In their survey, they explicitly ask respondents to estimate the “[r]ate of societal pure time preference (or utility discount rate)”, as if the two mean the same thing. Based on the descriptions in the paper, the authors appear to assume that a pure time preference is the only reason to discount future utility. But it is unclear how survey respondents interpreted the prompt, and different respondents probably interpreted it in different ways, which makes it difficult to say how we can use the responses. The modal discount rate was 0%, which at least suggests that many participants interpreted the question as asking for the rate of pure time preference.&lt;/p&gt;

&lt;p&gt;Surveyed experts reported a mean discount rate of 1.1% and a median of 0.5%. It’s not clear whether we should interpret these as the utility discount rate or as the rate of pure time preference (or perhaps as a weighted average of the two, if some survey respondents used one interpretation and some used the other).&lt;/p&gt;

&lt;p&gt;We could also attempt to derive the utility discount rate from experts’ beliefs about the social discount rate \(r\) using this formula: \(\delta = r - \eta g + 1/2 \eta (1 + \eta) \sigma^2\) (taken from Gollier et al. (2016), &lt;a href=&quot;http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.959.8633&amp;amp;rep=rep1&amp;amp;type=pdf&quot;&gt;Declining discount rates: Economic justifications and implications for long-run policy&lt;/a&gt;, section 2.1). Using the median \(\sigma\) from RORE and experts’ median estimates gives a \(\delta\) of 1.0%. But this move seems questionable because we don’t know if surveyed experts would agree with this estimate for \(\sigma\).&lt;/p&gt;
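&lt;p&gt;As a sketch of that calculation: the formula is from Gollier et al., but the specific inputs (a median social discount rate of about 2% and median growth of about 1.6% with \(\eta = 1\)) are my reading of Drupp et al.’s survey responses, so treat them as assumptions:&lt;/p&gt;

```python
# Sketch of the derivation above, using the Gollier et al. (2016) formula
#   delta = r - eta*g + 0.5*eta*(1+eta)*sigma^2.
# The expert medians (r = 2%, g = 1.6%, eta = 1) are my reading of Drupp
# et al.'s survey; sigma = 7.74% is the median from RORE. Rates are decimals.

def utility_discount_rate(r, g, sigma, eta):
    return r - eta * g + 0.5 * eta * (1 + eta) * sigma**2

delta = utility_discount_rate(r=0.02, g=0.016, sigma=0.0774, eta=1.0)
print(f"{delta:.1%}")  # about 1.0%
```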

&lt;h1 id=&quot;getting-from-the-typical-discount-rate-to-the-philanthropic-discount-rate&quot;&gt;Getting from the typical discount rate to the philanthropic discount rate&lt;/h1&gt;

&lt;p&gt;In the previous part, we attempted to estimate \(\delta\) using several methods. These methods did not produce consistent answers. But if we did manage to determine the value of \(\delta\), how do we then derive the philanthropic discount rate \(\delta_P\)?&lt;/p&gt;

&lt;p&gt;Most people behave as though they have a positive pure time preference. If we know the average rate of pure time preference (call it \(\rho\)), we can calculate \(\delta_P\) as \(\delta - \rho\). This gives us the “patient” discount rate, where we only discount due to empirical factors such as the probability of extinction, not due to a pure time preference.&lt;/p&gt;

&lt;p&gt;But well-informed philanthropists might substantially disagree with most people about things like the probability of extinction. In that case, we might estimate a different discount rate.&lt;/p&gt;

&lt;h2 id=&quot;if-we-assume-factual-agreement--pure-time-preference&quot;&gt;If we assume factual agreement + pure time preference&lt;/h2&gt;

&lt;p&gt;We can break down the discount rate into the patient discount rate plus the rate of pure time preference. The patient discount rate tells us the extent to which we should discount based on empirical factors such as the probability of extinction or the probability that we lose access to our funds. If we assume everyone agrees on the empirical discount rate, then we can determine the value of the philanthropic discount rate (which equals the patient discount rate) by taking the typical discount rate and subtracting the rate of pure time preference.&lt;/p&gt;

&lt;h3 id=&quot;what-is-the-rate-of-pure-time-preference&quot;&gt;What is the rate of pure time preference?&lt;/h3&gt;

&lt;p&gt;Unfortunately, research on the rate of pure time preference has produced mixed results.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;http://eprints.whiterose.ac.uk/125845/1/Actual_resubmission_DiscountingDisentangled_AEJP_2017_R2.pdf&quot;&gt;Discounting Disentangled&lt;/a&gt;, discussed previously, gave expert opinion on the discount rate, but it did not clearly distinguish between the utility discount rate and the rate of pure time preference.&lt;/p&gt;

&lt;p&gt;Frederick (2003), &lt;a href=&quot;https://doi.org/10.1023/A:1022298223127&quot;&gt;Measuring Intergenerational Time Preference: Are Future Lives Valued Less?&lt;/a&gt;, reviewed prior surveys that attempted to measure pure time preference, then conducted a series of new surveys. Reported pure time preferences varied with the exact question asked: average answers ranged from 0% to 6% depending on survey wording. So it’s not clear whether most people exhibit a pure time preference at all, much less what that rate is. If most people exhibit a 0% rate, the philanthropic discount rate equals the typical discount rate; but the two rates might differ by as much as six percentage points.&lt;/p&gt;

&lt;p&gt;For more on the problems with observing pure time preference, see Frederick et al. (2002), &lt;a href=&quot;https://www.jstor.org/stable/2698382&quot;&gt;Time Discounting and Time Preference: A Critical Review&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id=&quot;non-factual-disagreements-other-than-pure-time-preference&quot;&gt;Non-factual disagreements other than pure time preference&lt;/h2&gt;

&lt;p&gt;Philanthropists and typical investors might discount at different rates for reasons other than pure time preference, even if they agree about all empirical facts. Philanthropists might naturally differ from typical investors for two main reasons:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Typical investors don’t care about value drift.&lt;/li&gt;
  &lt;li&gt;Typical investors &lt;em&gt;do&lt;/em&gt; care about dying.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Value drift occurs when an altruist becomes less altruistic over time. If I stop spending money to do good, then from an altruistic perspective, my money is being wasted. So I should discount my future spending based on &lt;a href=&quot;https://forum.effectivealtruism.org/posts/eRQe4kkkH2pPzqvam/review-of-value-drift-estimates-and-several-new-estimates&quot;&gt;the probability that this will happen&lt;/a&gt;. But self-interested investors shouldn’t behave this way. Any money that I freely spend must be in accordance with my values at the time (assuming I’m behaving rationally), so I still consider it valuable from a self-interested perspective.&lt;/p&gt;

&lt;p&gt;But unlike altruistic investors, typical investors care less about what happens to their money after they die. Many investors do want to ensure their children or grandchildren can lead good lives, but it often makes sense for them to discount the future based on the probability that they die before then. For an altruist, by contrast, death does not much diminish the value of money. So typical investors have a reason to discount the future that altruists lack.&lt;/p&gt;

&lt;h2 id=&quot;if-we-dont-assume-factual-agreement&quot;&gt;If we don’t assume factual agreement&lt;/h2&gt;

&lt;p&gt;When I &lt;a href=&quot;https://mdickens.me/2020/07/03/estimating_discount_rate/&quot;&gt;directly estimated the philanthropic discount rate&lt;/a&gt;, I identified three key reasons to discount: extinction/global catastrophe, expropriation, and value drift. As discussed in the previous section, typical investors don’t care about value drift in the same way. In addition, many people in the effective altruism community have fairly unusual beliefs that might affect how they discount the future: namely, they pay more attention to existential risks. Many effective altruists assign much higher probabilities to existential catastrophes than most people do. However:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;It’s possible that most sophisticated investors do generally agree with relatively pessimistic assessments of existential risk.&lt;/li&gt;
  &lt;li&gt;Perhaps most effective altruists &lt;em&gt;aren’t&lt;/em&gt; particularly pessimistic about x-risk, and only differentially prioritize it because they care more about the long-run future.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;So effective altruists might discount the future more heavily based on their beliefs about x-risk, but this is not obviously the case.&lt;/p&gt;

&lt;p&gt;Disagreements about x-risk probably don’t substantially affect the overall discount rate. Even a fairly pessimistic 1% annual probability of existential catastrophe does not matter much in comparison to a &lt;a href=&quot;https://mdickens.me/2020/07/03/estimating_discount_rate/#value-drift&quot;&gt;10% value drift rate&lt;/a&gt;, or a 2% probability of dying.&lt;sup id=&quot;fnref:7&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:7&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;
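&lt;p&gt;A quick illustration, treating each reason to discount as an independent annual hazard (a simplifying assumption):&lt;/p&gt;

```python
# Illustration: treat each reason to discount (x-risk, value drift, death)
# as an independent annual hazard. The combined discount rate is then
# 1 minus the product of the survival probabilities, roughly the sum of
# the individual rates when they are small.

def combined_discount(*rates):
    survival = 1.0
    for r in rates:
        survival *= 1 - r
    return 1 - survival

baseline = combined_discount(0.01, 0.10)   # 1% x-risk + 10% value drift
pessimist = combined_discount(0.02, 0.10)  # doubling the x-risk estimate
print(f"{baseline:.1%} vs {pessimist:.1%}")
```

Doubling the assumed probability of existential catastrophe moves the combined rate by less than one percentage point, which is small next to the value drift term.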

&lt;h1 id=&quot;additional-considerations&quot;&gt;Additional considerations&lt;/h1&gt;

&lt;h2 id=&quot;rate-of-risk-aversion-eta&quot;&gt;Rate of risk aversion (\(\eta\))&lt;/h2&gt;

&lt;p&gt;As discussed previously, if we want to infer the discount rate from the market rate of return, we need to know how risk-averse most investors are. I presented estimates of the discount rate given various assumptions about risk aversion. But exactly how risk-averse are most investors?&lt;/p&gt;

&lt;p&gt;Unfortunately, as with all the other variables we tried to estimate in this essay, we cannot reliably pin down the rate of risk aversion \(\eta\). Different methodologies give different results.&lt;/p&gt;

&lt;p&gt;In &lt;a href=&quot;http://eprints.whiterose.ac.uk/125845/1/Actual_resubmission_DiscountingDisentangled_AEJP_2017_R2.pdf&quot;&gt;Discounting Disentangled&lt;/a&gt;, surveyed experts were asked to estimate the value of \(\eta\). They gave a mean of 1.35 and a median of 1.0 (a value of 1.0 corresponds to a logarithmic utility function), but their answers showed high variance.&lt;/p&gt;

&lt;p&gt;Studies of the relationship between income and happiness tend to find a roughly logarithmic relationship; for example, Stevenson &amp;amp; Wolfers (2013), &lt;a href=&quot;https://users.nber.org/~jwolfers/papers/Satiation(AER).pdf&quot;&gt;Subjective Well-Being and Income: Is There Any Evidence of Satiation?&lt;/a&gt; (DOI: 10.1257/aer.103.3.598).&lt;sup id=&quot;fnref:8&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:8&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;5&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;Gordon Irlam’s &lt;a href=&quot;https://www.aacalc.com/docs/relative_risk_aversion&quot;&gt;Estimating the Coefficient of Relative Risk Aversion for Consumption&lt;/a&gt; summarizes a range of attempts to determine the value of \(\eta\). Estimates vary greatly—Irlam claims that you could justifiably use a value anywhere between 1 and 4.&lt;/p&gt;

&lt;h2 id=&quot;why-is-the-risk-free-rate-so-low&quot;&gt;Why is the risk-free rate so low?&lt;/h2&gt;

&lt;p&gt;As of July 2021, &lt;a href=&quot;https://www.treasury.gov/resource-center/data-chart-center/interest-rates/Pages/TextView.aspx?data=realyield&quot;&gt;real Treasury yields are negative&lt;/a&gt;: -1.60% for 5-year bonds and -0.21% for 30-year bonds. How do we square this with a positive time preference? Why would anyone with a positive discount rate ever invest in bonds?&lt;/p&gt;

&lt;p&gt;Weil (1989), &lt;a href=&quot;http://doi.org/10.3386/w2829&quot;&gt;The equity premium puzzle and the risk-free rate puzzle&lt;/a&gt;, describes this issue as the risk-free rate puzzle. More recent publications have attempted to resolve the puzzle, for example Maki &amp;amp; Sonoda (2010), &lt;a href=&quot;https://doi.org/10.1080/09603100010014492&quot;&gt;A solution to the equity premium and riskfree rate puzzles: an empirical investigation using Japanese data&lt;/a&gt;, which explains low risk-free rates as a product of trading costs. The existence of this puzzle somewhat invalidates the approach I used to derive the discount rate based on historical investment performance.&lt;/p&gt;

&lt;h1 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h1&gt;

&lt;p&gt;To summarize what we’ve learned so far:&lt;/p&gt;

&lt;p&gt;In theory, we can reverse-engineer the philanthropic discount rate by estimating the typical discount rate and then subtracting the pure time preference.&lt;/p&gt;

&lt;p&gt;We can determine the typical discount rate from market data in various ways: by looking at broad market returns, behavior of endowment portfolios, public companies’ shareholder yield, or GDP growth. Or we can simply survey experts and ask them what they think.&lt;/p&gt;

&lt;p&gt;But when we attempt to reverse-engineer the philanthropic discount rate, we run into some problems:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Different methods to estimate the typical discount rate disagree with each other and have wide margins of error.&lt;/li&gt;
  &lt;li&gt;All of these methods require that we can accurately estimate how risk-averse people are, which we can’t. Attempts to estimate people’s risk aversion have produced inconsistent results.&lt;/li&gt;
  &lt;li&gt;We don’t know what rate of pure time preference most people use.&lt;/li&gt;
  &lt;li&gt;Philanthropists and typical investors might disagree about the discount rate for reasons other than pure time preference, and it’s hard to say how much they disagree.&lt;/li&gt;
  &lt;li&gt;Even if we can accurately estimate all of these parameters, this whole exercise relies on a theoretical model that doesn’t match reality—for example, it can’t explain why the risk-free rate is as low as it is.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Could we fix all of these problems? Perhaps. But some of these are open research areas that have already received substantial attention from academics, and still remain mysterious. It’s hard to say how much additional effort we would need to get a more realistic model with more accurate estimates.&lt;/p&gt;

&lt;p&gt;In conclusion, it doesn’t look like we can do a good job of reverse-engineering the philanthropic discount rate.&lt;/p&gt;

&lt;p&gt;Under some circumstances, we don’t care about the philanthropic discount rate; all we care about is the relationship between the philanthropic discount rate \(\delta_P\) and the typical discount rate \(\delta\). If patient philanthropists want to fund a particular cause that’s also funded by other actors, and if \(\delta_P &amp;lt; \delta\), then the cause may be over-funded according to the philanthropists’ discount rate. In that case, the philanthropists should invest all their money to give later. Sometimes we might have good reason to believe \(\delta_P &amp;lt; \delta\), even if we don’t know the exact difference. This greatly simplifies our decision, and we don’t need to precisely determine the philanthropic discount rate. For more on this, see Philip Trammell’s working paper &lt;a href=&quot;https://philiptrammell.com/static/PatienceAndPhilanthropy.pdf&quot;&gt;Dynamic Public Good Provision under Time Preference Heterogeneity&lt;/a&gt;&lt;sup id=&quot;fnref:9&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:9&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;6&lt;/a&gt;&lt;/sup&gt; (especially section 3, “Interactions between patient and impatient funders”), and his &lt;a href=&quot;https://80000hours.org/podcast/episodes/phil-trammell-patient-philanthropy/&quot;&gt;80,000 Hours interview&lt;/a&gt;.&lt;/p&gt;

&lt;h1 id=&quot;appendix&quot;&gt;Appendix&lt;/h1&gt;

&lt;h2 id=&quot;appendix-a-optimal-consumption-under-uncertainty&quot;&gt;Appendix A: Optimal consumption under uncertainty&lt;/h2&gt;

&lt;p&gt;The &lt;a href=&quot;https://en.wikipedia.org/wiki/Ramsey%E2%80%93Cass%E2%80%93Koopmans_model&quot;&gt;Ramsey model&lt;/a&gt; defines the relationship between consumption and investment. Traditionally, this model assumes a fixed investment rate. In real life, nearly everyone invests at least some of their money in risky assets like stocks, and even “risk-free” assets like short-term Treasury bills still carry &lt;a href=&quot;https://www.treasury.gov/resource-center/data-chart-center/interest-rates/Pages/TextView.aspx?data=yield&quot;&gt;some degree of volatility&lt;/a&gt;. If we want to estimate the discount rate using real financial data, our model needs to incorporate uncertainty.&lt;/p&gt;

&lt;p&gt;Levhari and Srinivasan (1969)&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;7&lt;/a&gt;&lt;/sup&gt; provide a solution. First, some definitions&lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;8&lt;/a&gt;&lt;/sup&gt;:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Let \(c_t\) be consumption at time \(t\).&lt;/li&gt;
  &lt;li&gt;Let \(\delta\) be the discount rate.&lt;/li&gt;
  &lt;li&gt;Let \(\eta\) be the rate at which marginal utility diminishes with consumption.&lt;/li&gt;
  &lt;li&gt;Let \(r_t\) be the investment return at \(t\). \(\log(1 + r_t)\) follows a normal distribution parameterized by mean \(\mu\) and standard deviation \(\sigma\) (which is to say \(r_t\) follows a &lt;a href=&quot;https://en.wikipedia.org/wiki/Log-normal_distribution&quot;&gt;lognormal distribution&lt;/a&gt;).&lt;/li&gt;
  &lt;li&gt;Let \(k_t\) be the value of capital at \(t\). Capital grows according to \(k_{t+1} = (k_t - c_t)(1 + r_t)\).&lt;/li&gt;
  &lt;li&gt;Let \(u(c)\) be a utility function of consumption. Suppose we have an isoelastic utility function, that is,
\begin{align}
u(c) = \displaystyle \begin{cases}{\frac {c^{1-\eta}-1}{1-\eta}} &amp;amp; \eta \neq 1 \\ \ln(c) &amp;amp; \eta =1\end{cases}
\end{align}&lt;/li&gt;
  &lt;li&gt;Our goal is to maximize total expected discounted utility
\begin{align}
E \left[\displaystyle\sum\limits_{t=0}^\infty (1 - \delta)^t u(c_t) \right]
\end{align}
(We discount by \((1 - \delta)^t\) instead of \(e^{-\delta t}\) because this model follows discrete time, not continuous. In the limiting case as the length of a time step approaches zero, these two discount factors are equal.)&lt;/li&gt;
&lt;/ul&gt;
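&lt;p&gt;As a quick illustration (my own sketch, not from the paper), the isoelastic utility function defined above translates directly into code, with the special case at \(\eta = 1\):&lt;/p&gt;

```python
import math

def utility(c, eta):
    """Isoelastic utility of consumption c with curvature parameter eta.

    At eta = 1 this reduces to log utility, matching the piecewise
    definition above.
    """
    if eta == 1:
        return math.log(c)
    return (c ** (1 - eta) - 1) / (1 - eta)
```

&lt;p&gt;For any \(\eta\), \(u(1) = 0\), and values of \(\eta\) near 1 approach \(\ln(c)\), so the two branches join smoothly.&lt;/p&gt;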

&lt;p&gt;Under these conditions, the optimal consumption rate is a constant (call it \(\lambda\)). \(\lambda\) is given by&lt;/p&gt;

&lt;p&gt;\begin{align}
\lambda = 1 - \exp[\frac{-\delta}{\eta} + \frac{1-\eta}{\eta}(\mu + \sigma^2/2) - (1 - \eta)\sigma^2/2]
\end{align}&lt;/p&gt;

&lt;p&gt;or&lt;/p&gt;

&lt;p&gt;\begin{align}
\lambda = 1 - \exp[\frac{-\delta}{\eta} + \frac{1-\eta}{\eta}\mu + \frac{1}{\eta}(1 - \eta)^2 \sigma^2/2]
\end{align}&lt;/p&gt;

&lt;p&gt;(Levhari and Srinivasan do not provide this exact result, but it easily follows from what they do give.)&lt;/p&gt;

&lt;p&gt;Using this equation, we can derive the value of \(\delta\) as a function of \(\lambda, \eta, \mu\), and \(\sigma\):&lt;/p&gt;

&lt;p&gt;\begin{align}
\delta = (1-\eta)\mu + (1 - \eta)^2\sigma^2/2 - \eta \log(1 - \lambda)
\end{align}&lt;/p&gt;
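&lt;p&gt;To make the relationship concrete, here is a minimal sketch (mine, not Levhari and Srinivasan’s) of the two formulas. The second inverts the first, so converting a discount rate to a consumption rate and back recovers the original \(\delta\):&lt;/p&gt;

```python
import math

def consumption_rate(delta, eta, mu, sigma):
    """Optimal constant consumption rate lambda, per the formula above."""
    return 1 - math.exp(-delta / eta
                        + (1 - eta) / eta * mu
                        + (1 - eta) ** 2 * sigma ** 2 / (2 * eta))

def implied_discount_rate(lam, eta, mu, sigma):
    """Recover delta from an observed consumption rate lambda."""
    return ((1 - eta) * mu
            + (1 - eta) ** 2 * sigma ** 2 / 2
            - eta * math.log(1 - lam))
```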

&lt;p&gt;If we do not know the value of \(\lambda\), we can derive it from the investment rate and consumption growth rate for any particular period.&lt;/p&gt;

&lt;p&gt;The consumption growth rate \(g_t\) must equal the capital growth rate because consumption is a fixed proportion of capital. By definition, the capital growth rate equals \(\frac{k_{t+1} - k_t}{k_t} = (1 - \lambda)(1 + r_t) - 1\). If we can empirically observe \(g_t\) and \(r_t\), we can use these to derive \(\lambda\):&lt;/p&gt;

&lt;p&gt;\begin{align}
\lambda = 1 - \frac{1 + g_t}{1 + r_t}
\end{align}&lt;/p&gt;

&lt;p&gt;In theory, this equation must hold for all times, because the actor will decide how much to grow their consumption (\(g_t\)) based on how much their portfolio earned during that period (\(r_t\)). In practice, it doesn’t hold, but we’ll get to that later.&lt;/p&gt;

&lt;p&gt;Now combine our equations for \(\lambda\) and \(\delta\). As a simplifying assumption, suppose \(g_t = \log(1 + g_t)\) and \(r_t = \log(1 + r_t)\). Strictly speaking these equalities are false, but they hold in the limit as time steps become arbitrarily small.&lt;/p&gt;

&lt;p&gt;This gives us a new formula for \(\delta\):&lt;/p&gt;

&lt;p&gt;\begin{align}
\delta = \eta (r_t - g_t) + (1 - \eta)(\mu + \sigma^2/2) - \eta(1 - \eta)\sigma^2/2
\end{align}&lt;/p&gt;

&lt;p&gt;We can simplify this a bit further. Suppose we identify the period with median investment return \(r\) and median consumption growth \(g\) (these must occur in the same period because \(g\) is monotonic with \(r\)). That means \(r = \mu\) (strictly speaking, \(r = e^\mu - 1\), but again, this gives \(r = \mu\) when time steps are infinitesimally small). Now we can simplify the formula to&lt;/p&gt;

&lt;p&gt;\begin{align}
\delta = r + (1 - \eta)^2\sigma^2/2 - \eta g
\end{align}&lt;/p&gt;

&lt;p&gt;These equations allow us to derive \(\delta\) if we can observe the consumption rate \(\lambda\) or, failing that, we can derive the optimal consumption rate from the interest rate \(r_t\) and consumption growth rate \(g_t\) at a particular time \(t\).&lt;/p&gt;
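&lt;p&gt;Here is a small sketch of the simplified formula with purely illustrative inputs (the numbers below are assumptions for demonstration, not estimates from this essay). With log utility (\(\eta = 1\)) the variance term vanishes, and \(\delta\) is just the return minus \(\eta\) times consumption growth:&lt;/p&gt;

```python
def discount_rate_from_market(r, g, eta, sigma):
    """delta = r + (1 - eta)^2 * sigma^2 / 2 - eta * g,
    evaluated at the median-return period where r equals mu."""
    return r + (1 - eta) ** 2 * sigma ** 2 / 2 - eta * g

# Illustrative: 5% real return, 2% consumption growth,
# log utility (eta = 1), 15% volatility.
print(round(discount_rate_from_market(0.05, 0.02, 1.0, 0.15), 4))  # 0.03
```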

&lt;h2 id=&quot;appendix-b-details-on-harvard-endowment-annual-report&quot;&gt;Appendix B: Details on Harvard endowment annual report&lt;/h2&gt;

&lt;p&gt;The columns in the table below are defined as follows. All numbers are in millions of dollars, except for \(\lambda\) and \(\lambda_G\), which represent percentages.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;“Endowment”: The total value of the endowment&lt;/li&gt;
  &lt;li&gt;“Gifts”: Value of gifts made to the endowment&lt;/li&gt;
  &lt;li&gt;“Return”: The investment return of the endowment portfolio&lt;/li&gt;
  &lt;li&gt;“Expenditures”: Money withdrawn from the endowment to fund the university’s operations&lt;/li&gt;
  &lt;li&gt;“Savings”: Any portfolio return that’s not used for expenditures (= Return - Expenditures)&lt;/li&gt;
  &lt;li&gt;“\(\lambda\)”: Consumption rate (= Expenditures / Endowment)&lt;/li&gt;
  &lt;li&gt;“\(\lambda_G\)”: Consumption rate excluding gifts (= (Expenditures - Gifts) / (Endowment - Gifts))&lt;/li&gt;
&lt;/ul&gt;
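&lt;p&gt;As a sanity check on the definitions above, both rates can be recomputed from any row of the table; using the 2003 figures:&lt;/p&gt;

```python
def consumption_rates(endowment, gifts, expenditures):
    """Compute lambda and lambda_G (as percentages) per the definitions above."""
    lam = expenditures / endowment * 100
    lam_g = (expenditures - gifts) / (endowment - gifts) * 100
    return lam, lam_g

# 2003 row: endowment 19294, gifts 262, expenditures 770 (millions).
lam, lam_g = consumption_rates(19294, 262, 770)
print(round(lam, 1), round(lam_g, 1))  # 4.0 2.7
```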

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt;Endowment&lt;/th&gt;
      &lt;th&gt;Gifts&lt;/th&gt;
      &lt;th&gt;Return&lt;/th&gt;
      &lt;th&gt;Expenditures&lt;/th&gt;
      &lt;th&gt;Savings&lt;/th&gt;
      &lt;th&gt;\(\lambda\)&lt;/th&gt;
      &lt;th&gt;\(\lambda_G\)&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;2003&lt;/td&gt;
      &lt;td&gt;19294&lt;/td&gt;
      &lt;td&gt;262&lt;/td&gt;
      &lt;td&gt;2059&lt;/td&gt;
      &lt;td&gt;770&lt;/td&gt;
      &lt;td&gt;1289&lt;/td&gt;
      &lt;td&gt;4&lt;/td&gt;
      &lt;td&gt;2.7&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;2004&lt;/td&gt;
      &lt;td&gt;22587&lt;/td&gt;
      &lt;td&gt;257&lt;/td&gt;
      &lt;td&gt;3800&lt;/td&gt;
      &lt;td&gt;807&lt;/td&gt;
      &lt;td&gt;2993&lt;/td&gt;
      &lt;td&gt;3.6&lt;/td&gt;
      &lt;td&gt;2.5&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;2005&lt;/td&gt;
      &lt;td&gt;25853&lt;/td&gt;
      &lt;td&gt;285&lt;/td&gt;
      &lt;td&gt;4044&lt;/td&gt;
      &lt;td&gt;854&lt;/td&gt;
      &lt;td&gt;3190&lt;/td&gt;
      &lt;td&gt;3.3&lt;/td&gt;
      &lt;td&gt;2.2&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;2006&lt;/td&gt;
      &lt;td&gt;29219&lt;/td&gt;
      &lt;td&gt;273&lt;/td&gt;
      &lt;td&gt;4113&lt;/td&gt;
      &lt;td&gt;933&lt;/td&gt;
      &lt;td&gt;3180&lt;/td&gt;
      &lt;td&gt;3.2&lt;/td&gt;
      &lt;td&gt;2.3&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;2007&lt;/td&gt;
      &lt;td&gt;34912&lt;/td&gt;
      &lt;td&gt;277&lt;/td&gt;
      &lt;td&gt;6499&lt;/td&gt;
      &lt;td&gt;1043&lt;/td&gt;
      &lt;td&gt;5456&lt;/td&gt;
      &lt;td&gt;3&lt;/td&gt;
      &lt;td&gt;2.2&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;2008&lt;/td&gt;
      &lt;td&gt;36926&lt;/td&gt;
      &lt;td&gt;336&lt;/td&gt;
      &lt;td&gt;2880&lt;/td&gt;
      &lt;td&gt;1201&lt;/td&gt;
      &lt;td&gt;1679&lt;/td&gt;
      &lt;td&gt;3.3&lt;/td&gt;
      &lt;td&gt;2.4&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;2009&lt;/td&gt;
      &lt;td&gt;26138&lt;/td&gt;
      &lt;td&gt;194&lt;/td&gt;
      &lt;td&gt;-9591&lt;/td&gt;
      &lt;td&gt;1416&lt;/td&gt;
      &lt;td&gt;-11007&lt;/td&gt;
      &lt;td&gt;5.4&lt;/td&gt;
      &lt;td&gt;4.7&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;2010&lt;/td&gt;
      &lt;td&gt;27557&lt;/td&gt;
      &lt;td&gt;240&lt;/td&gt;
      &lt;td&gt;2630&lt;/td&gt;
      &lt;td&gt;1320&lt;/td&gt;
      &lt;td&gt;1310&lt;/td&gt;
      &lt;td&gt;4.8&lt;/td&gt;
      &lt;td&gt;4&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;2011&lt;/td&gt;
      &lt;td&gt;32012&lt;/td&gt;
      &lt;td&gt;212&lt;/td&gt;
      &lt;td&gt;5500&lt;/td&gt;
      &lt;td&gt;1321&lt;/td&gt;
      &lt;td&gt;4179&lt;/td&gt;
      &lt;td&gt;4.1&lt;/td&gt;
      &lt;td&gt;3.5&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;2012&lt;/td&gt;
      &lt;td&gt;30745&lt;/td&gt;
      &lt;td&gt;226&lt;/td&gt;
      &lt;td&gt;0&lt;/td&gt;
      &lt;td&gt;1422&lt;/td&gt;
      &lt;td&gt;-1422&lt;/td&gt;
      &lt;td&gt;4.6&lt;/td&gt;
      &lt;td&gt;3.9&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;2013&lt;/td&gt;
      &lt;td&gt;32689&lt;/td&gt;
      &lt;td&gt;222&lt;/td&gt;
      &lt;td&gt;3267&lt;/td&gt;
      &lt;td&gt;1499&lt;/td&gt;
      &lt;td&gt;1768&lt;/td&gt;
      &lt;td&gt;4.6&lt;/td&gt;
      &lt;td&gt;3.9&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;2014&lt;/td&gt;
      &lt;td&gt;36429&lt;/td&gt;
      &lt;td&gt;512&lt;/td&gt;
      &lt;td&gt;4688&lt;/td&gt;
      &lt;td&gt;1539&lt;/td&gt;
      &lt;td&gt;3149&lt;/td&gt;
      &lt;td&gt;4.2&lt;/td&gt;
      &lt;td&gt;2.9&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;2015&lt;/td&gt;
      &lt;td&gt;37615&lt;/td&gt;
      &lt;td&gt;338&lt;/td&gt;
      &lt;td&gt;1956&lt;/td&gt;
      &lt;td&gt;1594&lt;/td&gt;
      &lt;td&gt;362&lt;/td&gt;
      &lt;td&gt;4.2&lt;/td&gt;
      &lt;td&gt;3.4&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;2016&lt;/td&gt;
      &lt;td&gt;35665&lt;/td&gt;
      &lt;td&gt;491&lt;/td&gt;
      &lt;td&gt;-625&lt;/td&gt;
      &lt;td&gt;1706&lt;/td&gt;
      &lt;td&gt;-2331&lt;/td&gt;
      &lt;td&gt;4.8&lt;/td&gt;
      &lt;td&gt;3.5&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;2017&lt;/td&gt;
      &lt;td&gt;37096&lt;/td&gt;
      &lt;td&gt;550&lt;/td&gt;
      &lt;td&gt;2651&lt;/td&gt;
      &lt;td&gt;1787&lt;/td&gt;
      &lt;td&gt;864&lt;/td&gt;
      &lt;td&gt;4.8&lt;/td&gt;
      &lt;td&gt;3.4&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;2018&lt;/td&gt;
      &lt;td&gt;39233&lt;/td&gt;
      &lt;td&gt;646&lt;/td&gt;
      &lt;td&gt;3332&lt;/td&gt;
      &lt;td&gt;1821&lt;/td&gt;
      &lt;td&gt;1511&lt;/td&gt;
      &lt;td&gt;4.6&lt;/td&gt;
      &lt;td&gt;3&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;2019&lt;/td&gt;
      &lt;td&gt;40929&lt;/td&gt;
      &lt;td&gt;613&lt;/td&gt;
      &lt;td&gt;2326&lt;/td&gt;
      &lt;td&gt;1908&lt;/td&gt;
      &lt;td&gt;418&lt;/td&gt;
      &lt;td&gt;4.7&lt;/td&gt;
      &lt;td&gt;3.2&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:3&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Dimson, Marsh, and Staunton (2020). &lt;a href=&quot;https://www.credit-suisse.com/media/assets/corporate/docs/about-us/research/publications/credit-suisse-global-investment-returns-yearbook-2020-summary-edition.pdf&quot;&gt;Summary Edition Credit Suisse Global Investment Returns Yearbook 2020.&lt;/a&gt; &lt;a href=&quot;#fnref:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:4&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I don’t have data on the global shareholder yield, so I am naively assuming that it equals the yield of the S&amp;amp;P 500. &lt;a href=&quot;#fnref:4&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:6&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;The author distinguishes between risk aversion and intertemporal substitutability of consumption, which I do not, so they present the results a little differently. &lt;a href=&quot;#fnref:6&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:7&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Technically, the probability of dying &lt;a href=&quot;https://www.ssa.gov/oact/STATS/table4c6.html&quot;&gt;varies depending on age&lt;/a&gt;. 30-year-olds only have about a 0.1% annual probability of death, increasing to 2% by age 70 and 5% by age 80. &lt;a href=&quot;#fnref:7&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:8&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;The paper does not explicitly find a logarithmic relationship; it is simply attempting to argue that income continues to increase happiness on the margin, even at relatively high income levels. But the paper’s data is publicly available. &lt;a href=&quot;https://github.com/michaeldickens/public-scripts/tree/master/happiness&quot;&gt;I fitted the data&lt;/a&gt; to a hyperbolic curve and found the best fit at \(\eta = 0.86\), which is a little less risk-averse than a logarithmic utility function. &lt;a href=&quot;#fnref:8&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:9&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Accessed 2021-07-09. &lt;a href=&quot;#fnref:9&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Levhari and Srinivasan (1969). &lt;a href=&quot;https://doi.org/10.2307/2296834&quot;&gt;Optimal Savings Under Uncertainty.&lt;/a&gt; &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:2&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;My definitions differ from Levhari and Srinivasan in a few ways:&lt;/p&gt;
      &lt;ol&gt;
        &lt;li&gt;They define \(r_t\) as one plus the interest rate, and I define it as the interest rate.&lt;/li&gt;
        &lt;li&gt;They use a discount factor \(\beta\), and I use a discount rate \(\delta\) (such that \(\beta = 1 - \delta\)).&lt;/li&gt;
        &lt;li&gt;They refer to the rate of diminishing marginal utility as \(\alpha\), and I refer to it as \(\eta\).&lt;/li&gt;
      &lt;/ol&gt;
      &lt;p&gt;&lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>How Can We Increase the Frequency of Rare Insights?</title>
				<pubDate>Mon, 19 Apr 2021 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2021/04/19/insights/</link>
				<guid isPermaLink="true">http://mdickens.me/2021/04/19/insights/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;In many contexts, progress largely comes not from incremental progress, but from sudden and unpredictable insights. This is true at many different levels of scope—from one person’s current project, to one person’s life’s work, to the aggregate output of an entire field. But we know almost nothing about what causes these insights or how to increase their frequency.&lt;/p&gt;

&lt;!-- more --&gt;

&lt;p&gt;&lt;em&gt;Cross-posted to &lt;a href=&quot;https://www.lesswrong.com/posts/yyD5mQogGew4sJSs3/how-can-we-increase-the-frequency-of-rare-insights&quot;&gt;LessWrong&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;h2 id=&quot;incremental-progress-vs-sudden-insights&quot;&gt;Incremental progress vs. sudden insights&lt;/h2&gt;

&lt;p&gt;To simplify, progress can come in one of two ways:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Incremental improvements through spending a long time doing hard work.&lt;/li&gt;
  &lt;li&gt;Long periods of no progress, interspersed with sudden flashes of insight.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Realistically, the truth falls somewhere between these two extremes. Some activities, like theorem-proving, look more like the second case; other activities, like transcribing paper records onto a computer, look more like the first. When Andrew Wiles proved Fermat’s Last Theorem, he had to go through the grind of writing a 200-page proof, but he also had to have sparks of insight to figure out how to bridge the gaps in the proof.&lt;/p&gt;

&lt;p&gt;The axis of incremental improvements vs. rare insights is mostly independent of the axis of easy vs. hard. A task can be sudden and easy, or incremental and hard. For example:&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt;incremental work&lt;/th&gt;
      &lt;th&gt;sudden insights&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;easy&lt;/td&gt;
      &lt;td&gt;algebra homework&lt;/td&gt;
      &lt;td&gt;geometry homework&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;hard&lt;/td&gt;
      &lt;td&gt;building machine learning models&lt;/td&gt;
      &lt;td&gt;proving novel theorems&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Insofar as progress comes from “doing the work”, we know how to make progress. But insofar as it comes from rare insights, we don’t know.&lt;/p&gt;

&lt;h2 id=&quot;some-meditations-on-the-nature-of-insights&quot;&gt;Some meditations on the nature of insights&lt;/h2&gt;

&lt;p&gt;&lt;a href=&quot;https://rootsofprogress.org/why-did-it-take-so-long&quot;&gt;Why did it take so long to invent X?&lt;/a&gt;&lt;/p&gt;

&lt;h3 id=&quot;feynman-on-finding-the-right-psychological-conditions&quot;&gt;Feynman on finding the right psychological conditions&lt;/h3&gt;

&lt;p&gt;Physicist Richard Feynman talks about this in &lt;a href=&quot;https://www.youtube.com/watch?v=mvqwm6RbxcQ&amp;amp;t=5m29s&quot;&gt;Take the World from Another Point of View&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;I worked out the theory of helium, once, and suddenly saw everything. I’d been struggling, struggling for two years, and suddenly saw everything at one time. […] And then you wonder, what’s the psychological condition? Well I know at that particular time, I simply looked up and I said wait a minute, it can’t be quite that difficult. It must be very easy. I’ll stand back, I’ll treat it very lightly, I’ll just tap it, and there it was! So how many times since then, I’m walking on the beach and I say, now look, it can’t be that complicated. And I’ll tap it, tap it, nothing happens.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Feynman tried to figure out what conditions lead to insights, but he “never found any correlations with anything.”&lt;/p&gt;

&lt;h3 id=&quot;p-vs-np&quot;&gt;P vs. NP&lt;/h3&gt;

&lt;p&gt;A pessimistic take would be that there’s basically no way to increase the probability of insights. Recognizing insights as obvious in retrospect is easy, but coming up with them is hard, and this is a fundamental mathematical fact about reality because &lt;a href=&quot;https://simple.wikipedia.org/wiki/P_versus_NP&quot;&gt;P != NP&lt;/a&gt; (probably). As Scott Aaronson &lt;a href=&quot;https://www.scottaaronson.com/blog/?p=122&quot;&gt;writes&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;If P=NP, then the world would be a profoundly different place than we usually assume it to be. There would be no special value in “creative leaps,” no fundamental gap between solving a problem and recognizing the solution once it’s found. Everyone who could appreciate a symphony would be Mozart; everyone who could follow a step-by-step argument would be Gauss; everyone who could recognize a good investment strategy would be Warren Buffett. It’s possible to put the point in Darwinian terms: if this is the sort of universe we inhabited, why wouldn’t we already have evolved to take advantage of it?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I’m not quite so pessimistic. I agree with Scott Aaronson’s basic argument that solving problems is much harder than recognizing good solutions, but there might still be ways we could make it easier to solve problems.&lt;/p&gt;

&lt;h3 id=&quot;johnswentworth-on-problems-we-dont-understand&quot;&gt;johnswentworth on problems we don’t understand&lt;/h3&gt;

&lt;p&gt;The concept of sudden-insight problems relates to johnswentworth’s concept of &lt;a href=&quot;https://www.lesswrong.com/posts/CSZnj2YNMKGfsMbZA/specializing-in-problems-we-don-t-understand&quot;&gt;problems we don’t understand&lt;/a&gt;. Problems we don’t understand almost always require sudden insights, but problems that require sudden insights might be problems we understand (for example, proving theorems). johnswentworth proposes some types of learning that could help:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;ul&gt;
    &lt;li&gt;Learn the gears of a system, so you can later tackle problems involving the system which are unlike any you’ve seen before. Ex.: physiology classes for doctors.&lt;/li&gt;
    &lt;li&gt;Learn how to think about a system at a high level, e.g. enough to do Fermi estimates or identify key bottlenecks relevant to some design problem. Ex.: intro-level fluid mechanics.&lt;/li&gt;
    &lt;li&gt;Uncover unknown unknowns, like pitfalls which you wouldn’t have thought to check for, tools you wouldn’t have known existed, or problems you didn’t know were tractable/intractable. Ex.: intro-level statistics, or any course covering NP-completeness.&lt;/li&gt;
  &lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;I would expect these types of learning to increase the rate of insights.&lt;/p&gt;

&lt;h2 id=&quot;learning-how-to-increase-the-frequency-of-insights&quot;&gt;Learning how to increase the frequency of insights&lt;/h2&gt;

&lt;p&gt;Insights happen less frequently under bad conditions: when you’re sleep-deprived, or malnourished, or stressed out, or distracted by other problems. Some actions can increase the probability of insights—for example, by studying the field and getting a good understanding of similar problems. But even under ideal conditions, insights are rare.&lt;/p&gt;

&lt;p&gt;Interestingly, most of the things that increase the frequency of insights, such as sleep and caffeine, also increase the speed at which you can do incremental work. It’s possible that these things speed up thinking, but don’t increase the probability that any particular thought is the “right” one.&lt;/p&gt;

&lt;p&gt;I can come up with one exception: you can (probably?) increase the frequency of insights on a problem if you understand a wide variety of problems and concepts. I don’t believe this does much to speed up incremental work, but it does make sudden insights more likely. Perhaps this happens because sudden insights often come from connecting two seemingly-unrelated ideas. I’ve heard some people recommend studying two disparate fields because you can use your knowledge of one field to bring a unique perspective to the other one.&lt;/p&gt;

&lt;p&gt;Overall, though, it seems to me that we as a society basically have no idea how to increase insights’ frequency beyond a basic low level.&lt;/p&gt;

&lt;p&gt;Instead of directly asking how to produce insights, we can ask how to &lt;em&gt;learn how&lt;/em&gt; to produce insights. If we wanted to learn more about what conditions produce insights, how might we do that? Could we formally study the conditions under which geniuses come up with genius ideas?&lt;/p&gt;

&lt;p&gt;If someone gave me a pile of money and asked me to figure out what conditions best promote insights, what would I do? I might start by recruiting a bunch of mathematicians and scientists to regularly report on their conditions along a bunch of axes: how long they slept, their stress level, etc. (I’d probably want to figure out some axes worth studying that we don’t already know much about, since we know that conditions (like sleep quality) do affect cognitive capacity.) Also have them report whenever they make some sort of breakthrough. If we collect enough high-quality data, we should be able to figure out what conditions work best, and disambiguate between factors that help provide insights and factors that “merely” increase cognitive capacity.&lt;/p&gt;

&lt;p&gt;I’m mostly just speculating here—I’m not sure the best way to study how to have insights. But it does seem like an important thing to know, and right now we understand very little about it.&lt;/p&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Some more specific examples from things I’ve worked on:&lt;/p&gt;

      &lt;ul&gt;
        &lt;li&gt;Easy, incremental work: &lt;a href=&quot;https://mdickens.me/2021/04/05/comparison_of_DAF_providers/&quot;&gt;A Comparison of Donor-Advised Fund Providers&lt;/a&gt; just required me to read the websites of a bunch of providers and talk to them about their offerings.&lt;/li&gt;
        &lt;li&gt;Easy, sudden insight: &lt;a href=&quot;https://mdickens.me/2020/10/18/risk_of_concentrating/&quot;&gt;The Risk of Concentrating Wealth in a Single Asset&lt;/a&gt; comes from the simple but oft-overlooked observation that you can compare different investments using their certainty-equivalent returns. I probably spent dozens of hours thinking about how to evaluate investments before this occurred to me. (This still qualifies as easy in the grand scheme of things.)&lt;/li&gt;
        &lt;li&gt;Hard, incremental work: &lt;a href=&quot;https://mdickens.me/2020/12/14/asset_allocation_for_altruists_with_constraints/&quot;&gt;Asset Allocation and Leverage for Altruists with Constraints&lt;/a&gt; required me to set up an optimization problem, write a program to solve it, and run through various scenarios that could change the results.&lt;/li&gt;
        &lt;li&gt;Hard, sudden insight: &lt;a href=&quot;https://mdickens.me/2016/05/16/givewell&apos;s_charity_recommendations_require_taking_an_unusual_stance_on_population_ethics/&quot;&gt;GiveWell’s Charity Recommendations Require Taking a Controversial Stance on Population Ethics&lt;/a&gt; presents a fairly simple argument. But it was hard to come up with—my impression is that I had a low before-the-fact probability of thinking of it.&lt;/li&gt;
      &lt;/ul&gt;
      &lt;p&gt;&lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>Investment Strategies for Donor-Advised Funds</title>
				<pubDate>Tue, 06 Apr 2021 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2021/04/06/investing_strategies_DAF/</link>
				<guid isPermaLink="true">http://mdickens.me/2021/04/06/investing_strategies_DAF/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;A &lt;a href=&quot;https://en.wikipedia.org/wiki/Donor-advised_fund&quot;&gt;donor-advised fund&lt;/a&gt; (DAF) is an investment account that allows you to take a tax deduction now and give the money to charity later. When you give money to a DAF, you can deduct that money just as you would deduct a charitable contribution. The DAF invests the money tax-free until you are ready to donate it to charity. But DAFs only allow limited investment options. How can we best make use of a DAF to optimize expected investment performance?&lt;/p&gt;

&lt;!-- more --&gt;

&lt;h2 id=&quot;contents&quot;&gt;Contents&lt;/h2&gt;

&lt;ul id=&quot;markdown-toc&quot;&gt;
  &lt;li&gt;&lt;a href=&quot;#contents&quot; id=&quot;markdown-toc-contents&quot;&gt;Contents&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#caveats&quot; id=&quot;markdown-toc-caveats&quot;&gt;Caveats&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#what-investment-options-do-dafs-provide&quot; id=&quot;markdown-toc-what-investment-options-do-dafs-provide&quot;&gt;What investment options do DAFs provide?&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#investment-strategies&quot; id=&quot;markdown-toc-investment-strategies&quot;&gt;Investment strategies&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#baseline-global-market-portfolio&quot; id=&quot;markdown-toc-baseline-global-market-portfolio&quot;&gt;Baseline: global market portfolio&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#overweighting-emerging-markets&quot; id=&quot;markdown-toc-overweighting-emerging-markets&quot;&gt;Overweighting emerging markets&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#momentum-across-asset-classes&quot; id=&quot;markdown-toc-momentum-across-asset-classes&quot;&gt;Momentum across asset classes&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#leveraged-etfs-with-a-managed-account&quot; id=&quot;markdown-toc-leveraged-etfs-with-a-managed-account&quot;&gt;Leveraged ETFs (with a managed account)&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#factor-funds-with-a-managed-account&quot; id=&quot;markdown-toc-factor-funds-with-a-managed-account&quot;&gt;Factor funds (with a managed account)&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#concentrated-stock-picking-with-a-managed-account&quot; id=&quot;markdown-toc-concentrated-stock-picking-with-a-managed-account&quot;&gt;Concentrated stock picking (with a managed account)&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#even-more-advanced-ideas&quot; id=&quot;markdown-toc-even-more-advanced-ideas&quot;&gt;(Even more) advanced ideas&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#create-a-new-daf&quot; id=&quot;markdown-toc-create-a-new-daf&quot;&gt;Create a new DAF&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#launch-a-new-leveraged-fund&quot; id=&quot;markdown-toc-launch-a-new-leveraged-fund&quot;&gt;Launch a new leveraged fund&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#comparing-the-choices&quot; id=&quot;markdown-toc-comparing-the-choices&quot;&gt;Comparing the choices&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#appendix-more-ideas&quot; id=&quot;markdown-toc-appendix-more-ideas&quot;&gt;Appendix: More ideas&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#hire-an-advisor-and-let-them-decide-how-to-invest&quot; id=&quot;markdown-toc-hire-an-advisor-and-let-them-decide-how-to-invest&quot;&gt;Hire an advisor and let them decide how to invest&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#find-an-advisor-who-can-generate-alpha&quot; id=&quot;markdown-toc-find-an-advisor-who-can-generate-alpha&quot;&gt;Find an advisor who can generate alpha&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#notes&quot; id=&quot;markdown-toc-notes&quot;&gt;Notes&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;caveats&quot;&gt;Caveats&lt;/h2&gt;

&lt;p&gt;Before we decide how to invest in a DAF, first we must ask: Should we use a DAF at all? Or should we keep our money in a taxable account? Or should we simply donate it all right away? The answer depends on the circumstances. I review some pros and cons of DAFs in &lt;a href=&quot;https://mdickens.me/2020/10/19/DAF_vs_taxable/&quot;&gt;Donor-Advised Funds vs. Taxable Accounts for Patient Donors&lt;/a&gt;. In this essay, I will assume we’ve already decided to use a DAF.&lt;/p&gt;

&lt;p&gt;I only cover DAFs in the United States because that’s what I’m familiar with. Many other countries probably have similar investment vehicles.&lt;/p&gt;

&lt;p&gt;I am not a lawyer or an investment advisor and none of this should be taken as legal or investment advice.&lt;/p&gt;

&lt;h2 id=&quot;what-investment-options-do-dafs-provide&quot;&gt;What investment options do DAFs provide?&lt;/h2&gt;

&lt;p&gt;The three most prominent donor-advised fund providers are &lt;a href=&quot;https://www.vanguardcharitable.org/&quot;&gt;Vanguard Charitable&lt;/a&gt;, &lt;a href=&quot;https://www.schwabcharitable.org/&quot;&gt;Schwab Charitable&lt;/a&gt;, and &lt;a href=&quot;https://www.fidelitycharitable.org/&quot;&gt;Fidelity Charitable&lt;/a&gt;. A fourth provider, &lt;a href=&quot;https://www.greaterhorizons.org/&quot;&gt;Greater Horizons&lt;/a&gt;, is worth including for reasons we will discuss below. All four DAF providers charge similar fees. Fees matter, but these four providers have similar enough fees that we can ignore them for now.&lt;/p&gt;

&lt;p&gt;The providers offer index funds covering the following markets:&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt;Fidelity&lt;/th&gt;
      &lt;th&gt;Schwab&lt;/th&gt;
      &lt;th&gt;Vanguard&lt;/th&gt;
      &lt;th&gt;Greater Horizons&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;US stocks&lt;/td&gt;
      &lt;td&gt;&lt;img src=&quot;/assets/images/green-checkmark.png&quot; class=&quot;checkmark-image&quot; /&gt;&lt;/td&gt;
      &lt;td&gt;&lt;img src=&quot;/assets/images/green-checkmark.png&quot; class=&quot;checkmark-image&quot; /&gt;&lt;/td&gt;
      &lt;td&gt;&lt;img src=&quot;/assets/images/green-checkmark.png&quot; class=&quot;checkmark-image&quot; /&gt;&lt;/td&gt;
      &lt;td&gt;&lt;img src=&quot;/assets/images/green-checkmark.png&quot; class=&quot;checkmark-image&quot; /&gt;&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;US small-cap&lt;/td&gt;
      &lt;td&gt; &lt;/td&gt;
      &lt;td&gt;&lt;img src=&quot;/assets/images/green-checkmark.png&quot; class=&quot;checkmark-image&quot; /&gt;&lt;/td&gt;
      &lt;td&gt; &lt;/td&gt;
      &lt;td&gt; &lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;international stocks&lt;/td&gt;
      &lt;td&gt;&lt;img src=&quot;/assets/images/green-checkmark.png&quot; class=&quot;checkmark-image&quot; /&gt;&lt;/td&gt;
      &lt;td&gt;&lt;img src=&quot;/assets/images/green-checkmark.png&quot; class=&quot;checkmark-image&quot; /&gt;&lt;/td&gt;
      &lt;td&gt;&lt;img src=&quot;/assets/images/green-checkmark.png&quot; class=&quot;checkmark-image&quot; /&gt;&lt;/td&gt;
      &lt;td&gt;&lt;img src=&quot;/assets/images/green-checkmark.png&quot; class=&quot;checkmark-image&quot; /&gt;&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Europe stocks&lt;/td&gt;
      &lt;td&gt; &lt;/td&gt;
      &lt;td&gt; &lt;/td&gt;
      &lt;td&gt;&lt;img src=&quot;/assets/images/green-checkmark.png&quot; class=&quot;checkmark-image&quot; /&gt;&lt;/td&gt;
      &lt;td&gt; &lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Pacific stocks&lt;/td&gt;
      &lt;td&gt; &lt;/td&gt;
      &lt;td&gt; &lt;/td&gt;
      &lt;td&gt;&lt;img src=&quot;/assets/images/green-checkmark.png&quot; class=&quot;checkmark-image&quot; /&gt;&lt;/td&gt;
      &lt;td&gt; &lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;emerging stocks&lt;/td&gt;
      &lt;td&gt; &lt;/td&gt;
      &lt;td&gt; &lt;/td&gt;
      &lt;td&gt;&lt;img src=&quot;/assets/images/green-checkmark.png&quot; class=&quot;checkmark-image&quot; /&gt;&lt;/td&gt;
      &lt;td&gt; &lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;US bonds&lt;/td&gt;
      &lt;td&gt;&lt;img src=&quot;/assets/images/green-checkmark.png&quot; class=&quot;checkmark-image&quot; /&gt;&lt;/td&gt;
      &lt;td&gt;&lt;img src=&quot;/assets/images/green-checkmark.png&quot; class=&quot;checkmark-image&quot; /&gt;&lt;/td&gt;
      &lt;td&gt;&lt;img src=&quot;/assets/images/green-checkmark.png&quot; class=&quot;checkmark-image&quot; /&gt;&lt;/td&gt;
      &lt;td&gt;&lt;img src=&quot;/assets/images/green-checkmark.png&quot; class=&quot;checkmark-image&quot; /&gt;&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;international bonds&lt;/td&gt;
      &lt;td&gt; &lt;/td&gt;
      &lt;td&gt; &lt;/td&gt;
      &lt;td&gt;&lt;img src=&quot;/assets/images/green-checkmark.png&quot; class=&quot;checkmark-image&quot; /&gt;&lt;/td&gt;
      &lt;td&gt; &lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;TIPS&lt;/td&gt;
      &lt;td&gt; &lt;/td&gt;
      &lt;td&gt;&lt;img src=&quot;/assets/images/green-checkmark.png&quot; class=&quot;checkmark-image&quot; /&gt;&lt;/td&gt;
      &lt;td&gt; &lt;/td&gt;
      &lt;td&gt; &lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;money market&lt;/td&gt;
      &lt;td&gt;&lt;img src=&quot;/assets/images/green-checkmark.png&quot; class=&quot;checkmark-image&quot; /&gt;&lt;/td&gt;
      &lt;td&gt;&lt;img src=&quot;/assets/images/green-checkmark.png&quot; class=&quot;checkmark-image&quot; /&gt;&lt;/td&gt;
      &lt;td&gt;&lt;img src=&quot;/assets/images/green-checkmark.png&quot; class=&quot;checkmark-image&quot; /&gt;&lt;/td&gt;
      &lt;td&gt;&lt;img src=&quot;/assets/images/green-checkmark.png&quot; class=&quot;checkmark-image&quot; /&gt;&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;If your account has over $250,000, Fidelity and Schwab (but not Vanguard) allow you to appoint a financial advisor to manage the account. This advisor can invest in things other than the pre-selected funds, as long as they abide by certain restrictions (which are documented &lt;a href=&quot;https://www.schwabcharitable.org/public/file/P-8085389&quot;&gt;here&lt;/a&gt; for Schwab and &lt;a href=&quot;https://www.fidelitycharitable.org/content/dam/fc-public/docs/programs/ciap-investment-policies-and-guidelines.pdf&quot;&gt;here&lt;/a&gt; for Fidelity). Greater Horizons allows you to open an advisor-managed account with no minimum account size, and imposes fewer restrictions (see &lt;a href=&quot;https://www.greaterhorizons.org/documents/Financial-Advisor-Program-Guidelines.pdf&quot;&gt;here&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;I believe Greater Horizons is the most likely to allow advisors to follow risky or unusual strategies&lt;sup id=&quot;fnref:37&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:37&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;. Possibly due to legal restrictions, none of the DAFs allow investors to use leverage via margin, futures, options, or swaps&lt;sup id=&quot;fnref:33&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:33&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt; (but they do allow leveraged ETFs).&lt;/p&gt;

&lt;p&gt;For more on what different DAF providers offer, see &lt;a href=&quot;https://mdickens.me/2021/04/05/comparison_of_DAF_providers/&quot;&gt;A Comparison of Donor-Advised Fund Providers&lt;/a&gt;.&lt;/p&gt;

&lt;h1 id=&quot;investment-strategies&quot;&gt;Investment strategies&lt;/h1&gt;

&lt;p&gt;Let’s review some possible investment strategies. This list includes all the ideas that I personally have considered, but I’m sure there are some good ideas that I don’t know about.&lt;/p&gt;

&lt;h2 id=&quot;baseline-global-market-portfolio&quot;&gt;Baseline: global market portfolio&lt;/h2&gt;

&lt;p&gt;If the efficient market hypothesis is strictly true, then we should invest in the &lt;a href=&quot;https://www.forbes.com/sites/phildemuth/2014/07/30/meet-the-global-market-portfolio-the-optimal-portfolio-for-the-average-investor/&quot;&gt;global market portfolio&lt;/a&gt;. The investment selections available in a Fidelity, Schwab, or Vanguard DAF don’t include every asset class in the world, but they do include most of the biggest ones. A DAF can get pretty close to the global market portfolio with something like this:&lt;/p&gt;

&lt;table&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;US stocks&lt;/td&gt;
      &lt;td&gt;30%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;international stocks&lt;/td&gt;
      &lt;td&gt;30%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;US bonds&lt;/td&gt;
      &lt;td&gt;40%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;If you use Vanguard, you can include international bonds, so you can get even closer to the global market portfolio:&lt;/p&gt;

&lt;table&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;US stocks&lt;/td&gt;
      &lt;td&gt;30%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;international stocks&lt;/td&gt;
      &lt;td&gt;30%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;US bonds&lt;/td&gt;
      &lt;td&gt;20%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;international bonds&lt;/td&gt;
      &lt;td&gt;20%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Many altruistic investors can tolerate more risk than this. We can increase risk by removing bonds and only investing in stocks—something like 50% in a US stock fund and 50% in an international stock fund.&lt;/p&gt;

&lt;p&gt;Next, let’s look at a few strategies that might improve on this baseline.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Caveat:&lt;/strong&gt; We can never know for certain how well an investment will perform in the future. For the rest of the strategies in this essay, we have at least some reason to expect them to outperform the global market portfolio in the long run, but nothing is guaranteed.&lt;/p&gt;

&lt;h2 id=&quot;overweighting-emerging-markets&quot;&gt;Overweighting emerging markets&lt;/h2&gt;

&lt;p&gt;In theory, the best way to increase expected return is to use leverage: borrow money to invest more than you have. But most DAFs don’t allow leverage. Without it, the only way to increase expected return is to tilt toward high-risk, high-return asset classes. We can do this by investing in all stocks and no bonds. Can we go further?&lt;/p&gt;

&lt;p&gt;To increase expected return, we can concentrate in the region of the world where equities have the highest expected return. To do so, we need to make an educated guess as to which region that is.&lt;/p&gt;

&lt;p&gt;We don’t have any perfect way to determine future expected returns, and we could take many different approaches. Several institutions provide return projections using various methodologies, and they generally agree that emerging market equities currently have higher expected returns than other asset classes (e.g., &lt;a href=&quot;https://www.aqr.com/Insights/Research/Alternative-Thinking/2021-Capital-Markets-Assumptions-for-Major-Asset-Classes&quot;&gt;AQR&lt;/a&gt;, &lt;a href=&quot;https://interactive.researchaffiliates.com/asset-allocation&quot;&gt;Research Affiliates&lt;/a&gt;, &lt;a href=&quot;https://advisors.vanguard.com/insights/article/marketperspectivesfebruary2021&quot;&gt;Vanguard&lt;/a&gt;). This higher expected return comes with higher risk. So we can simulate leverage by buying emerging market stocks.&lt;/p&gt;

&lt;p&gt;(Of the major DAF providers, only Vanguard offers an emerging markets index fund as a pre-selected option. Investors who want to hold emerging markets, but don’t want a professionally managed account, would need to use Vanguard.)&lt;/p&gt;

&lt;p&gt;How much higher of a return should we expect from emerging markets than from global equities? That depends on what methodology we use to estimate future returns. According to AQR, emerging markets have an expected excess return of about one percentage point, while Research Affiliates expects a three percentage point excess return (as of March 2021).&lt;/p&gt;

&lt;p&gt;The main reason for this difference is that Research Affiliates adjusts for stock market valuations, while AQR does not. Emerging market stocks look cheaper than stocks in developed markets. We know that cheap countries tend to outperform, sometimes by a lot (&lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2129474&quot;&gt;Faber, 2012&lt;/a&gt;&lt;sup id=&quot;fnref:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt;; &lt;a href=&quot;https://jpm.pm-research.com/content/early/2020/05/16/jpm.2020.1.160&quot;&gt;Radha, 2020&lt;/a&gt;&lt;sup id=&quot;fnref:4&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:4&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt;). More generally, cheap assets tend to outperform expensive assets both within and across markets (&lt;a href=&quot;http://pages.stern.nyu.edu/~lpederse/papers/ValMomEverywhere.pdf&quot;&gt;Asness et al., 2013&lt;/a&gt;&lt;sup id=&quot;fnref:9&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:9&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;5&lt;/a&gt;&lt;/sup&gt;). 
There’s mixed evidence on what causes this, but it could happen due to a combination of home country bias (&lt;a href=&quot;https://alphaarchitect.com/2019/12/05/global-impact-of-investor-home-country-bias/&quot;&gt;Swedroe, 2019&lt;/a&gt;; &lt;a href=&quot;https://www.bridgewater.com/research-and-insights/geographic-diversification-can-be-a-lifesaver-yet-most-portfolios-are-highly-geographically-concentrated&quot;&gt;Saphier et al., 2019&lt;/a&gt;), common behavioral biases (&lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=227016&quot;&gt;Lakonishok et al., 1993&lt;/a&gt;&lt;sup id=&quot;fnref:24&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:24&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;6&lt;/a&gt;&lt;/sup&gt;), and limitations on sophisticated investors that prevent them from correcting this mispricing (&lt;a href=&quot;http://faculty.tuck.dartmouth.edu/images/uploads/faculty/jonathan-lewellen/Institutions.pdf&quot;&gt;Lewellen, 2011&lt;/a&gt;&lt;sup id=&quot;fnref:10&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:10&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;7&lt;/a&gt;&lt;/sup&gt;; &lt;a href=&quot;http://pages.stern.nyu.edu/~cedmond/phd/Shleifer%20Vishny%20JF%201997.pdf&quot;&gt;Shleifer &amp;amp; Vishny, 1997&lt;/a&gt;&lt;sup id=&quot;fnref:11&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:11&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;8&lt;/a&gt;&lt;/sup&gt;). That said, Research Affiliates’ estimates might &lt;em&gt;over&lt;/em&gt;-correct for the valuations of different markets, so my own estimate for the expected return of emerging markets would fall somewhere between AQR’s and Research Affiliates’ estimates.&lt;/p&gt;

&lt;h2 id=&quot;momentum-across-asset-classes&quot;&gt;Momentum across asset classes&lt;/h2&gt;

&lt;p&gt;As discussed in the previous section, cheap assets tend to outperform expensive assets. Value isn’t the only investment factor that systematically predicts performance. There’s also the momentum factor: assets that have outperformed over the past 6–12 months tend to continue to outperform for a few months (&lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1919226&quot;&gt;Jegadeesh &amp;amp; Titman, 2011&lt;/a&gt;&lt;sup id=&quot;fnref:29&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:29&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;9&lt;/a&gt;&lt;/sup&gt;; &lt;a href=&quot;http://schwert.ssb.rochester.edu/f532/ff_JF08.pdf&quot;&gt;Fama &amp;amp; French, 2008&lt;/a&gt;&lt;sup id=&quot;fnref:23&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:23&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;10&lt;/a&gt;&lt;/sup&gt;; &lt;a href=&quot;https://www.aqr.com/-/media/AQR/Documents/Insights/White-Papers/The-Case-for-Momentum-Investing.pdf&quot;&gt;Berger et al., 2009&lt;/a&gt;&lt;sup id=&quot;fnref:13&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:13&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;11&lt;/a&gt;&lt;/sup&gt;). Similarly, assets that have performed well on an absolute basis tend to continue to perform well (&lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2720600&quot;&gt;D’Souza et al., 2016&lt;/a&gt;&lt;sup id=&quot;fnref:30&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:30&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;12&lt;/a&gt;&lt;/sup&gt;; &lt;a href=&quot;https://arxiv.org/pdf/1404.3274.pdf&quot;&gt;Lempérière et al., 2014&lt;/a&gt;&lt;sup id=&quot;fnref:27&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:27&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;13&lt;/a&gt;&lt;/sup&gt;; &lt;a href=&quot;https://www.aqr.com/Insights/Research/Working-Paper/Trends-Everywhere&quot;&gt;Babu et al., 2019&lt;/a&gt;&lt;sup id=&quot;fnref:14&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:14&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;14&lt;/a&gt;&lt;/sup&gt;; &lt;a href=&quot;https://papers.ssrn.com/sol3/Papers.cfm?abstract_id=962461&quot;&gt;Faber, 2013&lt;/a&gt;&lt;sup id=&quot;fnref:15&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:15&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;15&lt;/a&gt;&lt;/sup&gt;)—this is known as absolute momentum or trend-following.&lt;/p&gt;

&lt;p&gt;A DAF can’t invest in individual high-momentum stocks, but it can invest in whichever of the available funds show the strongest momentum. Gary Antonacci describes a version of this concept in his book, &lt;a href=&quot;https://www.amazon.com/gp/product/0071849440/&quot;&gt;Dual Momentum&lt;/a&gt;, and in a &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2042750&quot;&gt;paper&lt;/a&gt;&lt;sup id=&quot;fnref:7&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:7&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;16&lt;/a&gt;&lt;/sup&gt;; a similar approach was studied in &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2720600&quot;&gt;D’Souza et al. (2016)&lt;/a&gt;&lt;sup id=&quot;fnref:30:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:30&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;12&lt;/a&gt;&lt;/sup&gt;. (The strategy is called Dual Momentum because it uses both relative and absolute momentum.) Dual Momentum works like this:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Start with three index funds: a US stock fund, an international stock fund, and a bond fund.&lt;/li&gt;
  &lt;li&gt;If both stock funds had negative total return over the prior 12 months, buy the bond fund.&lt;/li&gt;
  &lt;li&gt;Otherwise, buy whichever stock fund had higher return over the prior 12 months.&lt;/li&gt;
  &lt;li&gt;Rebalance monthly.&lt;/li&gt;
&lt;/ol&gt;
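
&lt;p&gt;As a purely illustrative sketch (my own code, not Antonacci’s implementation), the selection rule above fits in a few lines of Python. The fund labels are placeholders, and I assume the trailing 12-month total returns have already been computed as decimals:&lt;/p&gt;

```python
def dual_momentum_pick(us_return, intl_return, bond_fund="bonds"):
    """Choose next month's holding from trailing 12-month total returns.

    us_return and intl_return are the prior 12-month total returns of
    the US and international stock funds, as decimals (0.08 means 8%).
    """
    if us_return >= 0 or intl_return >= 0:
        # Relative momentum: hold whichever stock fund did better.
        return "us_stocks" if us_return >= intl_return else "intl_stocks"
    # Absolute momentum: both stock funds lost money, so retreat to bonds.
    return bond_fund
```

&lt;p&gt;Each month, recompute the trailing returns and move the whole portfolio into whichever fund the rule selects.&lt;/p&gt;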

&lt;p&gt;In a &lt;a href=&quot;https://dualmomentum.net/2018/10/16/extended-backtest-of-global-equities-momentum/&quot;&gt;backtest from 1950 to 2018&lt;/a&gt;, Dual Momentum performed much better than a diversified buy-and-hold strategy:&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt;Dual Momentum&lt;/th&gt;
      &lt;th&gt;Buy and Hold&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Return&lt;/td&gt;
      &lt;td&gt;15.8%&lt;/td&gt;
      &lt;td&gt;10.0%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Stdev&lt;/td&gt;
      &lt;td&gt;11.5%&lt;/td&gt;
      &lt;td&gt;9.8%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;a href=&quot;https://en.wikipedia.org/wiki/Sharpe_ratio&quot;&gt;Sharpe Ratio&lt;/a&gt;&lt;/td&gt;
      &lt;td&gt;0.96&lt;/td&gt;
      &lt;td&gt;0.57&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Max Drawdown&lt;/td&gt;
      &lt;td&gt;-18%&lt;/td&gt;
      &lt;td&gt;-41%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;We don’t have to use US stocks + international stocks + bonds as our asset classes. A similar strategy would have performed well on many different combinations of asset classes. For example:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1585517&quot;&gt;Faber (2010)&lt;/a&gt;&lt;sup id=&quot;fnref:12&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:12&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;17&lt;/a&gt;&lt;/sup&gt; finds strong historical performance for a momentum strategy that rotates among US stocks + international stocks + REITs + commodities + bonds&lt;sup id=&quot;fnref:12:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:12&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;17&lt;/a&gt;&lt;/sup&gt;.&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;http://www-stat.wharton.upenn.edu/~steele/Courses/956/Resource/Momentum/MoskowitzGrinblatt99.pdf&quot;&gt;Moskowitz &amp;amp; Grinblatt (1999)&lt;/a&gt;&lt;sup id=&quot;fnref:38&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:38&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;18&lt;/a&gt;&lt;/sup&gt; identify a momentum premium across industries.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;(I was able to replicate these results in and out of sample using data from the &lt;a href=&quot;https://mba.tuck.dartmouth.edu/pages/faculty/ken.french/data_library.html&quot;&gt;Ken French Data Library&lt;/a&gt;, the &lt;a href=&quot;https://www.federalreserve.gov/datadownload/&quot;&gt;US Federal Reserve&lt;/a&gt;, and &lt;a href=&quot;https://www.aqr.com/Insights/Datasets/Commodities-for-the-Long-Run-Index-Level-Data-Monthly&quot;&gt;AQR’s commodity index&lt;/a&gt;.)&lt;/p&gt;

&lt;p&gt;The momentum premium shows up &lt;a href=&quot;http://pages.stern.nyu.edu/~lpederse/papers/ValMomEverywhere.pdf&quot;&gt;everywhere&lt;/a&gt;&lt;sup id=&quot;fnref:9:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:9&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;5&lt;/a&gt;&lt;/sup&gt;—it is the most consistent and robust market anomaly. For more detailed evidence that momentum-based (as well as valuation-based) asset allocation strategies work, including a discussion on expected future performance, see &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1079975&quot;&gt;Blitz &amp;amp; van Vliet (2008)&lt;/a&gt;&lt;sup id=&quot;fnref:21&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:21&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;19&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;We could run an asset class momentum strategy in a DAF through Vanguard Charitable:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Consider five funds: US stock, Europe stock, Pacific stock, emerging stock, and US bonds&lt;sup id=&quot;fnref:8&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:8&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;20&lt;/a&gt;&lt;/sup&gt;.&lt;/li&gt;
  &lt;li&gt;Out of these five funds, hold whichever two have the highest prior 12-month return.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;But&lt;/strong&gt;, if either of those two funds had a negative 12-month return, put that portion of the money into a money market fund instead.&lt;/li&gt;
  &lt;li&gt;Rebalance monthly.&lt;/li&gt;
&lt;/ol&gt;
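
&lt;p&gt;A hypothetical sketch of this top-two rule (the fund names and data structure are my own assumptions, not anything Vanguard provides):&lt;/p&gt;

```python
def momentum_top_two(returns_12m, cash="money_market"):
    """Target allocation given prior 12-month returns keyed by fund name.

    Hold the two best-performing funds at 50% each; any selected fund
    whose 12-month return was not positive is replaced by money market.
    """
    ranked = sorted(returns_12m, key=returns_12m.get, reverse=True)
    allocation = {}
    for fund in ranked[:2]:
        target = fund if returns_12m[fund] > 0 else cash
        allocation[target] = allocation.get(target, 0.0) + 0.5
    return allocation
```

&lt;p&gt;In a broad bull market this holds the two strongest stock funds; in a broad bear market it drifts into bonds and money market automatically.&lt;/p&gt;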

&lt;p&gt;Or we could run Dual Momentum in Fidelity or Schwab, using three asset classes (US stocks, international stocks, and bonds) instead of five.&lt;/p&gt;

&lt;p&gt;(There are lots of variations of asset class momentum: using a different set of funds; using a time horizon other than 12 months, such as 6 or 9 months; rebalancing quarterly instead of monthly; etc. Every variation has pros and cons. The important thing is to capture the momentum premium, and any reasonable implementation will do that.)&lt;/p&gt;

&lt;p&gt;What kind of future returns might we expect from asset class momentum?&lt;/p&gt;

&lt;p&gt;As a basic approach, let’s look at how well asset class momentum has performed in the past compared to buy-and-hold. This tells us the historical size of the “asset class momentum premium”. But the premium will likely shrink in the future&lt;sup id=&quot;fnref:39&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:39&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;21&lt;/a&gt;&lt;/sup&gt;. If we roughly assume that asset class momentum will work half as well in the future as it has in the past and that the volatility won’t change, then we can expect asset class momentum to earn about a 5–6% net real return with 12% standard deviation&lt;sup id=&quot;fnref:20&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:20&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;22&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;
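
&lt;p&gt;To make the arithmetic explicit: using the 1950–2018 backtest figures from the table above, plus my own assumption that only half the historical premium survives, the estimate works out like this (the forward buy-and-hold real return is a placeholder, not a forecast):&lt;/p&gt;

```python
# Historical backtest figures from the Dual Momentum table above.
historical_momentum = 0.158
historical_buy_hold = 0.100
historical_premium = historical_momentum - historical_buy_hold  # 5.8 pp

# Assume the momentum premium shrinks by half going forward.
future_premium = historical_premium / 2  # about 2.9 pp

# Placeholder forward-looking real return for buy-and-hold equities.
expected_buy_hold_real = 0.03
expected_momentum_real = expected_buy_hold_real + future_premium
# roughly 5.9% real, in the same ballpark as the 5-6% figure cited above
```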

&lt;p&gt;Note: Asset class momentum strategies often move in and out of positions many times in a year. DAFs generally don’t want investors to trade excessively. If they believe you are trading too frequently, they might decide to limit your trades. I expect they wouldn’t do that, but it’s a risk worth considering.&lt;/p&gt;

&lt;h2 id=&quot;leveraged-etfs-with-a-managed-account&quot;&gt;Leveraged ETFs (with a managed account)&lt;/h2&gt;

&lt;p&gt;By default, DAF providers only allow you to buy funds from their pre-selected list. But most DAF providers allow you to appoint a professional investment manager. (Advisor-managed accounts typically have a $250,000 minimum, although Greater Horizons has no minimum.) That manager can invest however they want as long as they follow certain guidelines. They can’t get leverage via margin, futures, or options, but they’re allowed to buy leveraged ETFs.&lt;/p&gt;

&lt;p&gt;Leveraged ETFs tend to cost more than other types of leveraged investments. In &lt;a href=&quot;https://mdickens.me/2021/03/04/true_cost_of_leveraged_etfs/&quot;&gt;a previous essay&lt;/a&gt;, I estimated that leveraged ETFs on average cost about 2% plus the risk-free rate (although the exact number varies a lot over time). Investment managers typically charge around 1% for small(ish) investors and 0.5% for large investors, so holding leveraged ETFs in a managed account will cost 2.5 to 3 percentage points more than investing in a DAF’s pre-selected funds. Fidelity Charitable allows you to self-manage your DAF if you have over $5 million, so you could cut out the advisor fee.&lt;/p&gt;

&lt;p&gt;For truly risk-neutral investors, this is probably worth it. Under most plausible assumptions, leveraged ETFs have a high enough expected return to make up for the cost. For investors with logarithmic utility (which is probably still too aggressive for individual investors, but might be appropriate for altruists), the higher cost of leveraged ETFs probably cancels out their higher expected utility. (For details, see &lt;a href=&quot;https://mdickens.me/2021/03/04/true_cost_of_leveraged_etfs/&quot;&gt;The True Cost of Leveraged ETFs&lt;/a&gt;.)&lt;/p&gt;
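&lt;p&gt;To see how the log-utility tradeoff can play out, here is a toy calculation using the standard approximation that expected log growth is roughly the arithmetic expected return minus half the variance. The premium, volatility, and cost inputs are hypothetical placeholders, not estimates from this post:&lt;/p&gt;

```python
# Toy log-utility comparison of 1x vs. 2x leverage (hypothetical inputs).
# Approximation: expected log growth = leverage*premium - cost - (leverage*sigma)**2 / 2

def expected_log_growth(leverage, premium, sigma, extra_cost):
    return leverage * premium - extra_cost - (leverage * sigma) ** 2 / 2

premium, sigma = 0.05, 0.16  # hypothetical equity premium and volatility
print(expected_log_growth(1, premium, sigma, 0.0))   # unleveraged
print(expected_log_growth(2, premium, sigma, 0.02))  # 2x, with 2pp extra cost
```

&lt;p&gt;Under these particular inputs, the extra cost slightly more than offsets the benefit of leverage for a log-utility investor; with cheaper leverage or a higher premium, the comparison flips.&lt;/p&gt;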

&lt;h2 id=&quot;factor-funds-with-a-managed-account&quot;&gt;Factor funds (with a managed account)&lt;/h2&gt;

&lt;p&gt;The value and momentum factors have historically predicted stock performance. (There are some other factors as well, such as size, low-volatility, and quality. These aren’t as well-supported as value and momentum, but they might be worth incorporating.) Some ETFs and mutual funds provide exposure to these factors. With a managed account, your investment advisor could use factor funds to try to outperform the market.&lt;/p&gt;

&lt;p&gt;Factor funds (at least the good ones) are much cheaper than leveraged ETFs. To be worth buying, factor funds need to outperform the market after both fund expenses and the advisor’s fee. That’s probably achievable in expectation.&lt;/p&gt;

&lt;p&gt;You can run asset class momentum yourself (and skip the advisor’s fee), so it seems less likely that a factor fund portfolio can outperform asset class momentum after costs. But using an advisor means you don’t have to do any work, and the time savings might justify the extra cost.&lt;/p&gt;

&lt;h2 id=&quot;concentrated-stock-picking-with-a-managed-account&quot;&gt;Concentrated stock picking (with a managed account)&lt;/h2&gt;

&lt;p&gt;If we’re willing to take heavier bets on value and momentum, we can &lt;a href=&quot;https://mdickens.me/2021/02/08/concentrated_stock_selection/&quot;&gt;concentrate our investments&lt;/a&gt; in a small number of value/momentum stocks (follow the link for a more detailed explanation). Backtests suggest that the top 10% of small-cap value and momentum stocks earned about double the return of the market before fees (with more than 1x the volatility of the market, but less than 2x). A portfolio of individual stocks would have to pay trading costs plus a management fee. This would probably cost less than a leveraged ETF, but definitely more than an un-managed DAF portfolio.&lt;/p&gt;
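&lt;p&gt;A concentrated value/momentum screen can be sketched with a simple combined ranking: rank stocks by cheapness and by momentum separately, average the two ranks, and keep only the top names. The tickers and figures below are hypothetical, and a real strategy would also need small-cap filters, trading costs, and rebalancing rules as discussed in the linked post:&lt;/p&gt;

```python
# Illustrative concentrated value/momentum screen (hypothetical data).

stocks = [
    {"ticker": "AAA", "earnings_yield": 0.15, "mom_12m": 0.30},
    {"ticker": "BBB", "earnings_yield": 0.04, "mom_12m": 0.45},
    {"ticker": "CCC", "earnings_yield": 0.12, "mom_12m": 0.25},
    {"ticker": "DDD", "earnings_yield": 0.02, "mom_12m": 0.05},
]

def combined_rank(universe):
    """Rank by value and momentum separately, then average the ranks."""
    by_value = sorted(universe, key=lambda s: s["earnings_yield"], reverse=True)
    by_mom = sorted(universe, key=lambda s: s["mom_12m"], reverse=True)
    def score(s):
        return (by_value.index(s) + by_mom.index(s)) / 2
    return sorted(universe, key=score)

top = combined_rank(stocks)[:2]  # concentrate in the best-ranked names
print([s["ticker"] for s in top])
```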

&lt;p&gt;How good is a concentrated value/momentum strategy? That depends on how well we expect the value and momentum factors to perform in the future. &lt;a href=&quot;https://mdickens.me/2020/11/23/uncorrelated_investing/#will-these-factors-continue-to-work&quot;&gt;It seems likely&lt;/a&gt; that they will continue to work at least to some extent, and the value factor in particular looks attractive relative to history (as of early 2021). Still, it’s difficult to forecast future performance. But even if a concentrated value/momentum strategy has zero risk-adjusted premium over the market, it should still outperform a leveraged ETF by having lower costs.&lt;/p&gt;

&lt;p&gt;A related approach: Instead of buying individual value/momentum stocks or a set of diversified factor funds, we could buy a concentrated mutual fund or ETF that looks particularly attractive. For example, as of this writing, the Russian stock market looks cheaper than almost any other market, so we could buy the &lt;a href=&quot;https://www.etf.com/ERUS&quot;&gt;iShares MSCI Russia ETF (ERUS)&lt;/a&gt;. Buying ERUS is the same basic strategy as buying an emerging market index fund as discussed previously, but it’s higher risk and has the potential to deliver a higher return. We can’t get as much concentration with an ETF as with a basket of stocks, but this approach has the advantage that we only need to buy one thing.&lt;/p&gt;

&lt;h1 id=&quot;even-more-advanced-ideas&quot;&gt;(Even more) advanced ideas&lt;/h1&gt;

&lt;h2 id=&quot;create-a-new-daf&quot;&gt;Create a new DAF&lt;/h2&gt;

&lt;p&gt;Legally, any registered charity can operate a DAF. (At least that’s my understanding from reading IRS guidelines.) The charity just needs to accept money from donors, invest that money, and allow donors to recommend grants to other charities. But it’s a lot of work to set up all the necessary infrastructure. A sufficiently motivated organization can do it if they believe they’ll get enough assets under management to justify the effort.&lt;/p&gt;

&lt;p&gt;Charities have to abide by certain legal restrictions about how they can invest, but the restrictions are not well-specified and I don’t understand them very well. A new DAF might be able to provide better investment options than any of the ones I’ve discussed so far, or it might not, depending on the legal situation.&lt;/p&gt;

&lt;h2 id=&quot;launch-a-new-leveraged-fund&quot;&gt;Launch a new leveraged fund&lt;/h2&gt;

&lt;p&gt;The leveraged ETFs on the market today have some problems that make them less than ideal for long-term investing. But a new leveraged ETF or mutual fund could fix these problems. In particular:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Existing leveraged ETFs charge high fees. Most people who buy leveraged ETFs use them for short-term trading, so they don’t care much about the fees. But these ETFs have high profit margins, so a new fund could undercut them while still making a profit.&lt;/li&gt;
  &lt;li&gt;Existing leveraged ETFs rebalance daily. A long-term risk-friendly investor doesn’t need their account to perfectly track the leveraged index on a daily basis. A new leveraged ETF could rebalance less frequently (perhaps monthly), which would reduce trading costs.&lt;/li&gt;
  &lt;li&gt;Existing leveraged ETFs focus on a single market or sector. This makes each individual fund riskier. It also means an investor who buys multiple leveraged ETFs will need to rebalance their holdings manually. But a new leveraged ETF could invest in the global market portfolio (or something like it). That way, long-term investors could buy a single fund and would never need to rebalance.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Investors with a minimum of around $25 to $50 million could sponsor the creation of a new leveraged fund. Such a fund might be a better way to invest than any of the other options we’ve discussed.&lt;/p&gt;

&lt;h1 id=&quot;comparing-the-choices&quot;&gt;Comparing the choices&lt;/h1&gt;

&lt;p&gt;&lt;em&gt;&lt;a href=&quot;/confidence_tags&quot;&gt;Confidence&lt;/a&gt;: Possible.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;So far we’ve looked at seven investment strategies that might perform better than the global market portfolio:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Buy and hold an emerging markets index fund.&lt;/li&gt;
  &lt;li&gt;Run asset class momentum on the pre-selected asset classes.&lt;/li&gt;
  &lt;li&gt;With an investment manager, buy and hold leveraged ETFs.&lt;/li&gt;
  &lt;li&gt;With an investment manager, buy and hold some factor funds.&lt;/li&gt;
  &lt;li&gt;With an investment manager, invest in a concentrated basket of value/momentum stocks or a concentrated ETF.&lt;/li&gt;
  &lt;li&gt;Create a new DAF that offers more investment options.&lt;/li&gt;
  &lt;li&gt;Sponsor the creation of a new leveraged ETF or mutual fund.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I don’t know what legal restrictions would apply to a new DAF, so I can’t say much about that idea. Creating a new leveraged fund looks like the best option, but isn’t possible for most investors. Existing leveraged ETFs seem not worth buying. That leaves four choices: buying an emerging markets fund, running asset class momentum, buying factor funds, or buying a concentrated basket of stocks. Investors who do not meet the minimum requirement for a managed account (usually $250,000) must pick between the first two choices.&lt;/p&gt;

&lt;p&gt;When possible, I would prefer to invest in a concentrated basket of stocks. Historically, &lt;a href=&quot;https://mdickens.me/2021/02/08/concentrated_stock_selection/&quot;&gt;they offered substantially higher returns than the market&lt;/a&gt;, and &lt;a href=&quot;https://mdickens.me/2020/11/23/uncorrelated_investing/#will-these-factors-continue-to-work&quot;&gt;I expect this to continue&lt;/a&gt; (although probably to a lesser extent). Buying and holding factor funds is less risky, but also has worse expected return. You’ll have a much easier time finding an investment advisor who’s willing to buy factor funds, so it might be a more realistic option. (Buying factor funds is a pretty normal thing that lots of advisors do. Buying a few dozen deep value/momentum stocks is a weird thing that almost nobody does.)&lt;/p&gt;

&lt;p&gt;Between the two strategies that do not require a professionally managed account—asset class momentum and an emerging markets index fund—I lean toward asset class momentum, for three reasons:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Asset class momentum diversifies across multiple asset classes. All else equal, diversification is good.&lt;/li&gt;
  &lt;li&gt;According to backtests I ran, if we went back in time to 1990 and bought the asset class that performed best over the next 30 years, we still would have underperformed asset class momentum. Even with perfect foresight, asset class momentum still beat buy-and-hold.&lt;sup id=&quot;fnref:36&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:36&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;23&lt;/a&gt;&lt;/sup&gt;&lt;/li&gt;
  &lt;li&gt;According to my (very unreliable) future return projections, asset class momentum has a higher expected return than emerging markets.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I don’t put much credence in the last two points, but they provide at least &lt;em&gt;weak&lt;/em&gt; evidence in favor of asset class momentum.&lt;/p&gt;

&lt;p&gt;On the other hand, it seems hard to deny that emerging markets currently have a higher expected return than other parts of the world. Asset class momentum seems likely to continue working, but I’m not quite as confident about that. Holding an emerging markets fund has the added advantage that you never have to rebalance, so if you want to do as little work as possible, it might be a better choice.&lt;/p&gt;

&lt;h1 id=&quot;appendix-more-ideas&quot;&gt;Appendix: More ideas&lt;/h1&gt;

&lt;h2 id=&quot;hire-an-advisor-and-let-them-decide-how-to-invest&quot;&gt;Hire an advisor and let them decide how to invest&lt;/h2&gt;

&lt;p&gt;In general, this is a bad idea. Most investment advisors cannot outperform the market, and they’ll charge you a fee for essentially no added value. You’d be better off using the DAF’s pre-selected funds to build the global market portfolio (or get as close to it as you can).&lt;/p&gt;

&lt;p&gt;If your advisor is familiar with the research on value and momentum investing, they’re more likely to invest your money well.&lt;/p&gt;

&lt;h2 id=&quot;find-an-advisor-who-can-generate-alpha&quot;&gt;Find an advisor who can generate alpha&lt;/h2&gt;

&lt;p&gt;It’s not too hard to find an investment advisor who can build you a portfolio of value/momentum factor funds that’s better than the global market portfolio. But it’s much more difficult to find someone who can generate true &lt;em&gt;alpha&lt;/em&gt; on top of those factors. A few such people probably exist, but they’re nearly impossible to identify before the fact. In general, I don’t think it’s worth trying, although there might be exceptions.&lt;/p&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:37&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I spoke to representatives at each donor-advised fund and to a few financial advisors who manage DAFs. Based on those conversations, it appears that Greater Horizons offers the most flexibility. &lt;a href=&quot;#fnref:37&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:33&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Unfortunately, the legal requirements are not clear. A DAF provider has a fiduciary duty to make &lt;a href=&quot;https://www.investopedia.com/terms/p/prudent-investor-rule.asp&quot;&gt;prudent investments&lt;/a&gt;, and it’s largely up to courts’ interpretation as to what this means. DAF providers disallow investments that they think might violate fiduciary duty. For more, see the &lt;a href=&quot;https://www.fdic.gov/regulations/examinations/trustmanual/section_3/fdic_section_3-asset_management.html&quot;&gt;FDIC Trust Examination Manual: Investment Principles, Policies and Products&lt;/a&gt;. &lt;a href=&quot;#fnref:33&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:3&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Faber (2012). &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2129474&quot;&gt;Global Value: Building Trading Models with the 10 Year CAPE.&lt;/a&gt; For more detail, see Faber’s book on the same subject, &lt;a href=&quot;https://www.amazon.com/Global-Value-Bubbles-Crashes-Returns-ebook/dp/B00J351PXE/&quot;&gt;Global Value: How to Spot Bubbles, Avoid Market Crashes, and Earn Big Returns in the Stock Market&lt;/a&gt;. &lt;a href=&quot;#fnref:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:4&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Radha (2020). &lt;a href=&quot;https://jpm.pm-research.com/content/early/2020/05/16/jpm.2020.1.160&quot;&gt;Using CAPE to Forecast Country Returns for Designing an International Country Rotation Portfolio.&lt;/a&gt; &lt;a href=&quot;#fnref:4&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:9&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Asness, Moskowitz &amp;amp; Pedersen (2013). &lt;a href=&quot;http://pages.stern.nyu.edu/~lpederse/papers/ValMomEverywhere.pdf&quot;&gt;Value and Momentum Everywhere.&lt;/a&gt; &lt;a href=&quot;#fnref:9&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:9:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:24&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Lakonishok, Shleifer &amp;amp; Vishny (1993). &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=227016&quot;&gt;Contrarian Investment, Extrapolation, and Risk.&lt;/a&gt; &lt;a href=&quot;#fnref:24&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:10&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Lewellen (2011). &lt;a href=&quot;http://faculty.tuck.dartmouth.edu/images/uploads/faculty/jonathan-lewellen/Institutions.pdf&quot;&gt;Institutional Investors and the Limits of Arbitrage.&lt;/a&gt; &lt;a href=&quot;#fnref:10&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:11&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Shleifer &amp;amp; Vishny (1997). &lt;a href=&quot;http://pages.stern.nyu.edu/~cedmond/phd/Shleifer%20Vishny%20JF%201997.pdf&quot;&gt;The Limits of Arbitrage.&lt;/a&gt; &lt;a href=&quot;#fnref:11&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:29&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Jegadeesh &amp;amp; Titman (2011). &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1919226&quot;&gt;Momentum.&lt;/a&gt; &lt;a href=&quot;#fnref:29&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:23&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Fama &amp;amp; French (2008). &lt;a href=&quot;http://schwert.ssb.rochester.edu/f532/ff_JF08.pdf&quot;&gt;Dissecting Anomalies.&lt;/a&gt; &lt;a href=&quot;#fnref:23&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:13&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Berger, Israel &amp;amp; Moskowitz (2009). &lt;a href=&quot;https://www.aqr.com/-/media/AQR/Documents/Insights/White-Papers/The-Case-for-Momentum-Investing.pdf&quot;&gt;The Case for Momentum Investing.&lt;/a&gt; &lt;a href=&quot;#fnref:13&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:30&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;D’Souza, Srichanachaichok, Wang &amp;amp; Yao (2016). &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2720600&quot;&gt;The Enduring Effect of Time-Series Momentum on Stock Returns Over Nearly 100-Years.&lt;/a&gt; &lt;a href=&quot;#fnref:30&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:30:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:27&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Lempérière, Deremble, Seager, Potters &amp;amp; Bouchaud (2014). &lt;a href=&quot;https://arxiv.org/pdf/1404.3274.pdf&quot;&gt;Two Centuries of Trend Following.&lt;/a&gt; &lt;a href=&quot;#fnref:27&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:14&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Babu, Levine, Ooi, Pedersen &amp;amp; Stamelos (2019). &lt;a href=&quot;https://www.aqr.com/Insights/Research/Working-Paper/Trends-Everywhere&quot;&gt;Trends Everywhere.&lt;/a&gt; &lt;a href=&quot;#fnref:14&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:15&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Faber (2013). &lt;a href=&quot;https://papers.ssrn.com/sol3/Papers.cfm?abstract_id=962461&quot;&gt;A Quantitative Approach to Tactical Asset Allocation.&lt;/a&gt; &lt;a href=&quot;#fnref:15&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:7&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Antonacci (2016). &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2042750&quot;&gt;Risk Premia Harvesting Through Dual Momentum.&lt;/a&gt; &lt;a href=&quot;#fnref:7&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:12&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Faber (2010). &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1585517&quot;&gt;Relative Strength Strategies for Investing.&lt;/a&gt; &lt;a href=&quot;#fnref:12&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:12:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:38&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Moskowitz &amp;amp; Grinblatt (1999). &lt;a href=&quot;http://www-stat.wharton.upenn.edu/~steele/Courses/956/Resource/Momentum/MoskowitzGrinblatt99.pdf&quot;&gt;Do Industries Explain Momentum?&lt;/a&gt; &lt;a href=&quot;#fnref:38&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:21&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Blitz &amp;amp; van Vliet (2008). &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1079975&quot;&gt;Global Tactical Cross-Asset Allocation: Applying Value and Momentum Across Asset Classes.&lt;/a&gt; &lt;a href=&quot;#fnref:21&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:8&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;This list does not include the international stock fund because that fund is simply the sum of the Europe, Pacific, and emerging stock funds. Unlike Antonacci’s Dual Momentum, I treated bonds as a risky asset. I believe Antonacci’s backtest makes bonds look better than they are because bond yields consistently decreased for most of the tested period. It makes more sense to use a money market fund as the risk-free asset. &lt;a href=&quot;#fnref:8&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:39&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;We should expect the asset class momentum premium to shrink for two main reasons:&lt;/p&gt;

      &lt;ol&gt;
        &lt;li&gt;Bond yields are lower than they were in the backtest period. That reduces how much return an asset class momentum strategy can earn when it’s not holding stocks.&lt;/li&gt;
        &lt;li&gt;Momentum strategies have become more popular, which could cause them to perform worse going forward.&lt;/li&gt;
      &lt;/ol&gt;

      &lt;p&gt;The first point seems hard to dispute. Regarding the second point, as a simple rule of thumb, we could assume the premium will decline by half&lt;sup id=&quot;fnref:26&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:26&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;24&lt;/a&gt;&lt;/sup&gt;. Relevant research:&lt;/p&gt;

      &lt;ul&gt;
        &lt;li&gt;&lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2156623&quot;&gt;McLean &amp;amp; Pontiff (2016)&lt;/a&gt;&lt;sup id=&quot;fnref:25&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:25&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;25&lt;/a&gt;&lt;/sup&gt; found that published anomalies historically shrank by 58% after publication, of which 26% could be attributed to data mining and 32% to markets becoming more efficient.&lt;/li&gt;
        &lt;li&gt;&lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3774514&quot;&gt;Jensen, Kelly &amp;amp; Pedersen (2021)&lt;/a&gt;&lt;sup id=&quot;fnref:40&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:40&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;26&lt;/a&gt;&lt;/sup&gt; expanded McLean &amp;amp; Pontiff’s research and found about a 1/3 decline post-publication, all of which could be attributed to data mining.&lt;/li&gt;
        &lt;li&gt;&lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2912287&quot;&gt;Blitz (2017)&lt;/a&gt;&lt;sup id=&quot;fnref:18&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:18&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;27&lt;/a&gt;&lt;/sup&gt; found that ETFs do not harvest the momentum premium&lt;sup id=&quot;fnref:19&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:19&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;28&lt;/a&gt;&lt;/sup&gt;.&lt;/li&gt;
      &lt;/ul&gt;

      &lt;p&gt;Asset class momentum appears more likely than stock momentum to survive increasing popularity because (1) it trades larger, more liquid positions, and (2) most momentum investors use stock momentum, not asset class momentum. So arguably we should make a more optimistic assumption. &lt;a href=&quot;#fnref:39&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:20&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I came to this expected return figure as follows:&lt;/p&gt;

      &lt;ol&gt;
        &lt;li&gt;As a baseline, assume Dual Momentum will perform as well as the average of the US and international stock markets. If we take the median of AQR’s and Research Affiliates’ projections, we get an expected return of 3.4%.&lt;/li&gt;
        &lt;li&gt;Based on backtests, the historical Dual Momentum premium was about 6%. Halve this to get 3% and add to the baseline performance for a total expected return of 6.4%.&lt;/li&gt;
        &lt;li&gt;Subtract 1% for transaction costs to get 5.4%.&lt;/li&gt;
      &lt;/ol&gt;
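      &lt;p&gt;The three steps above, as arithmetic (the figures come from the steps themselves, not new data):&lt;/p&gt;

```python
# Footnote arithmetic for the asset class momentum return estimate.
baseline = 0.034           # median of AQR / Research Affiliates projections
historical_premium = 0.06  # backtested Dual Momentum premium
expected_premium = historical_premium / 2  # assume half the premium survives
transaction_costs = 0.01

expected_return = baseline + expected_premium - transaction_costs
print(f"{expected_return:.1%}")  # 5.4%
```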
      &lt;p&gt;&lt;a href=&quot;#fnref:20&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:36&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I backtested two different pairs of asset classes: (US stocks, global ex-US stocks) and (US stocks, Japan stocks). I specifically chose Japan because Japan had the weakest performance of any major stock market over the sample period. If asset class momentum over (US, Japan) outperformed US stocks alone, that would give the strongest possible evidence in favor of asset class momentum that my data set could produce. And indeed, it turned out that asset class momentum did (slightly) outperform. I don’t consider this strong evidence in an absolute sense, but it’s the best I can do given the data I have access to (I used the &lt;a href=&quot;https://mba.tuck.dartmouth.edu/pages/faculty/ken.french/data_library.html&quot;&gt;Ken French Data Library&lt;/a&gt;). &lt;a href=&quot;#fnref:36&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:26&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;In &lt;a href=&quot;https://www.aqr.com/Insights/Perspectives/How-Can-a-Strategy-Still-Work-If-Everyone-Knows-About-It&quot;&gt;How Can a Strategy Still Work If Everyone Knows About It?&lt;/a&gt;, Cliff Asness suggests making this “conservative” assumption. &lt;a href=&quot;#fnref:26&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:25&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;McLean &amp;amp; Pontiff (2016). &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2156623&quot;&gt;Does Academic Research Destroy Stock Return Predictability?&lt;/a&gt; &lt;a href=&quot;#fnref:25&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:40&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Jensen, Kelly &amp;amp; Pedersen (2021). &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3774514&quot;&gt;Is There a Replication Crisis in Finance?&lt;/a&gt; &lt;a href=&quot;#fnref:40&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:18&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Blitz (2017). &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2912287&quot;&gt;Are Exchange-Traded Funds Harvesting Factor Premiums?&lt;/a&gt; &lt;a href=&quot;#fnref:18&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:19&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;We might be interested in how hedge funds behave rather than ETFs, because hedge funds probably represent more ‘savvy’ investors. I am not aware of any research on whether hedge funds in aggregate exploit the momentum premium. But another 2017 study by Blitz, &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2898034&quot;&gt;Are Hedge Funds on the Other Side of the Low-Volatility Trade?&lt;/a&gt;, found that hedge funds systematically &lt;em&gt;reverse&lt;/em&gt; load on the low volatility premium (that is, they take the losing side of the trade), which weakly suggests they may do the same for momentum. &lt;a href=&quot;#fnref:19&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>A Comparison of Donor-Advised Fund Providers</title>
				<pubDate>Mon, 05 Apr 2021 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2021/04/05/comparison_of_DAF_providers/</link>
				<guid isPermaLink="true">http://mdickens.me/2021/04/05/comparison_of_DAF_providers/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;&lt;em&gt;Last updated 2025-05-09.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;A &lt;a href=&quot;https://en.wikipedia.org/wiki/Donor-advised_fund&quot;&gt;donor-advised fund&lt;/a&gt; (DAF) is an investment account that lets you take a tax deduction now and give the money to charity later. When you give money to a DAF, you can deduct that money just as you would deduct a charitable donation. The DAF invests the money tax-free. At any time, you can write a grant from your DAF to a charity of your choice.&lt;/p&gt;

&lt;p&gt;You can open a DAF through a donor-advised fund provider. The provider charges an administrative fee in exchange for investing your DAF’s assets and making grants when you recommend them.&lt;/p&gt;

&lt;p&gt;For donors in the United States,&lt;sup id=&quot;fnref:16&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:16&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt; which DAF provider is best?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The short answer&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;All the big DAF providers offer similar features. For most people, it doesn’t matter much which one you choose.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;If you already have a DAF, you might as well keep using it.&lt;/li&gt;
  &lt;li&gt;If you have a brokerage account at Fidelity, Schwab, or Vanguard, then the easiest thing to do is to open a DAF with your brokerage account. That way, you can manage all your investments in one place.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Otherwise, this handy flowchart can help you choose a DAF provider that fits your preferences.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;/assets/images/DAF-flowchart-v5.png&quot;&gt;&lt;img src=&quot;/assets/images/DAF-flowchart-v5.png&quot; /&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The long answer&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That flowchart might not cover everything you care about, and it doesn’t offer nuance. In the rest of this post, let’s look in detail at how DAF providers compare.&lt;/p&gt;

&lt;!-- more --&gt;

&lt;p&gt;&lt;em&gt;Cross-posted to the &lt;a href=&quot;https://forum.effectivealtruism.org/posts/qYuehBsAe6Ri6PZvL/a-comparison-of-donor-advised-fund-providers&quot;&gt;Effective Altruism Forum.&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2 id=&quot;contents&quot;&gt;Contents&lt;/h2&gt;

&lt;ul id=&quot;markdown-toc&quot;&gt;
  &lt;li&gt;&lt;a href=&quot;#contents&quot; id=&quot;markdown-toc-contents&quot;&gt;Contents&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#my-process&quot; id=&quot;markdown-toc-my-process&quot;&gt;My process&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#comparison&quot; id=&quot;markdown-toc-comparison&quot;&gt;Comparison&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#fees&quot; id=&quot;markdown-toc-fees&quot;&gt;Fees&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#minimums&quot; id=&quot;markdown-toc-minimums&quot;&gt;Minimums&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#investment-choices&quot; id=&quot;markdown-toc-investment-choices&quot;&gt;Investment choices&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#default-investments&quot; id=&quot;markdown-toc-default-investments&quot;&gt;Default investments&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#daffy-custom-portfolios&quot; id=&quot;markdown-toc-daffy-custom-portfolios&quot;&gt;Daffy custom portfolios&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#advisor-managed-accounts&quot; id=&quot;markdown-toc-advisor-managed-accounts&quot;&gt;Advisor-managed accounts&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#contributing-complex-assets&quot; id=&quot;markdown-toc-contributing-complex-assets&quot;&gt;Contributing complex assets&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#user-experience&quot; id=&quot;markdown-toc-user-experience&quot;&gt;User experience&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#how-risky-are-new-daf-providers&quot; id=&quot;markdown-toc-how-risky-are-new-daf-providers&quot;&gt;How risky are new DAF providers?&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#the-best-daf-providers&quot; id=&quot;markdown-toc-the-best-daf-providers&quot;&gt;The best DAF provider(s)&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#more-on-fidelity-aef-nptrust-and-t-rowe-price&quot; id=&quot;markdown-toc-more-on-fidelity-aef-nptrust-and-t-rowe-price&quot;&gt;More on Fidelity, AEF, NPTrust, and T. Rowe Price&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#appendix-table-of-administrative-fees&quot; id=&quot;markdown-toc-appendix-table-of-administrative-fees&quot;&gt;Appendix: Table of administrative fees&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#disclaimers&quot; id=&quot;markdown-toc-disclaimers&quot;&gt;Disclaimers&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#changelog&quot; id=&quot;markdown-toc-changelog&quot;&gt;Changelog&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#notes&quot; id=&quot;markdown-toc-notes&quot;&gt;Notes&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h1 id=&quot;my-process&quot;&gt;My process&lt;/h1&gt;

&lt;p&gt;I made a list of every United States nationwide DAF provider I could find. I excluded regional DAF providers (example: Silicon Valley Community Foundation), providers that only support certain causes (example: National Christian Foundation), and providers that don’t work with individual donors (example: American Online Giving Foundation). Your local community foundation might offer a better DAF than any of the national providers on my list, but there are too many community foundations for me to look at them all.&lt;/p&gt;

&lt;p&gt;I ended up with nine DAF providers (in alphabetical order):&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;American Endowment Foundation (AEF)&lt;/li&gt;
  &lt;li&gt;Charityvest (see &lt;a href=&quot;#disclaimers&quot;&gt;disclaimer&lt;/a&gt;)&lt;/li&gt;
  &lt;li&gt;Daffy (see &lt;a href=&quot;#disclaimers&quot;&gt;disclaimer&lt;/a&gt;)&lt;/li&gt;
  &lt;li&gt;Fidelity Charitable&lt;/li&gt;
  &lt;li&gt;Greater Horizons&lt;/li&gt;
  &lt;li&gt;National Philanthropic Trust (NPTrust)&lt;/li&gt;
  &lt;li&gt;Schwab Charitable&lt;/li&gt;
  &lt;li&gt;T. Rowe Price Charitable&lt;/li&gt;
  &lt;li&gt;Vanguard Charitable&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This list is probably not comprehensive, but it’s all the DAF providers I could find that meet my criteria.&lt;/p&gt;

&lt;p&gt;I spoke to representatives at these providers to fill in some gaps in my knowledge. I also spoke to a few DAF users and financial advisors who manage DAFs at different providers.&lt;/p&gt;

&lt;p&gt;I then eliminated four DAF providers:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;T. Rowe Price Charitable funds have excessively high fees (0.6% or higher).&lt;/li&gt;
  &lt;li&gt;AEF and NPTrust both charge higher administrative fees than Vanguard/Schwab, offer a worse user experience, and don’t have any special features to compensate.&lt;/li&gt;
  &lt;li&gt;Fidelity is nearly identical to Schwab, but it’s slightly worse in a few ways (for more, see &lt;a href=&quot;#more-on-fidelity-aef-nptrust-and-t-rowe-price&quot;&gt;here&lt;/a&gt;).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Five DAF providers remain: Charityvest, Daffy, Greater Horizons, Schwab, and Vanguard.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Caveat 1&lt;/strong&gt;: I don’t have good firsthand knowledge of any of these DAF providers except for Fidelity Charitable (which I used to use) and Greater Horizons (which I currently use). I created accounts at Charityvest, Daffy, Schwab, and Vanguard to get a sense of how they work, but I haven’t tried to do anything fancy like set up an advisor-managed account. I had to make subjective judgments on things like UI, so don’t take my claims as definitive.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Caveat 2&lt;/strong&gt;: This article is not about whether you should open a DAF in the first place. You might want to keep your money in a taxable account, or maybe donate it all right away. But if you’ve already decided you want a DAF, then I hope this article will help you choose a provider.&lt;/p&gt;

&lt;h1 id=&quot;comparison&quot;&gt;Comparison&lt;/h1&gt;

&lt;p&gt;Let’s compare Vanguard, Schwab, Greater Horizons, Daffy, and Charityvest on these questions:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;What fees do they charge? &lt;a href=&quot;#fees&quot;&gt;[More]&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;Do they have contribution/grant minimums? &lt;a href=&quot;#minimums&quot;&gt;[More]&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;What investment choices do they offer? &lt;a href=&quot;#investment-choices&quot;&gt;[More]&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;Do they have reasonable default investment options? &lt;a href=&quot;#default-investments&quot;&gt;[More]&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;How flexible are their advisor-managed accounts? &lt;a href=&quot;#advisor-managed-accounts&quot;&gt;[More]&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;Can you contribute complex assets such as cryptocurrency or real estate? &lt;a href=&quot;#contributing-complex-assets&quot;&gt;[More]&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;How good is the user experience? &lt;a href=&quot;#user-experience&quot;&gt;[More]&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;How risky are the newer DAF providers (Daffy and Charityvest)? &lt;a href=&quot;#how-risky-are-new-daf-providers&quot;&gt;[More]&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3 id=&quot;fees&quot;&gt;Fees&lt;/h3&gt;

&lt;p&gt;Minimum fees:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Charityvest has no minimum fee for cash accounts, and a minimum annual fee of $100 for accounts that hold stocks and bonds.&lt;/li&gt;
  &lt;li&gt;Daffy has three tiers of accounts, with minimums of $36, $60, and $240 per year.&lt;/li&gt;
  &lt;li&gt;Greater Horizons has a minimum annual fee of $500.&lt;/li&gt;
  &lt;li&gt;Schwab has no minimum annual fee.&lt;/li&gt;
  &lt;li&gt;Vanguard has a minimum annual fee of $250.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href=&quot;https://guide.charityvest.org/for-donors/fees-and-expenses&quot;&gt;Charityvest&lt;/a&gt;, Greater Horizons&lt;sup id=&quot;fnref:22&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:22&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt;, &lt;a href=&quot;https://www.schwabcharitable.org/features/fees-and-minimums&quot;&gt;Schwab&lt;/a&gt;, and &lt;a href=&quot;https://www.vanguardcharitable.org/giving-with-vc/fees-and-minimums&quot;&gt;Vanguard&lt;/a&gt; offer tiered fee structures based on the account value. Daffy charges a monthly fee but no percentage-based fee. See &lt;a href=&quot;#appendix-table-of-administrative-fees&quot;&gt;Appendix&lt;/a&gt; for full details on the administrative fees for each provider.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://www.daffy.org/membership&quot;&gt;Daffy&lt;/a&gt; has a unique pricing structure. It offers three plans: a basic account for $3/month, a family account for $5/month, and a premium account for $20/month. None of the plans charges a percentage-based fee.&lt;/p&gt;

&lt;p&gt;Daffy is the least expensive DAF provider if you have a basic account with at least $6,000 or a premium account with at least $40,000. But beware that Daffy does not allow you to contribute more than a &lt;strong&gt;lifetime&lt;/strong&gt; total of $25,000 in non-cash assets to the basic plan, or $50,000 to the family plan. If you make $75,000 and you &lt;a href=&quot;http://givingwhatwecan.org/&quot;&gt;donate 10% of your income&lt;/a&gt;, you’ll hit the basic plan’s limit in 4 years and the family plan’s limit in 7 years.&lt;/p&gt;
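&lt;p&gt;As a sanity check, here’s the arithmetic behind those figures (a minimal sketch; the $7,500/year assumes the full 10% is donated as non-cash assets):&lt;/p&gt;

```python
import math

income = 75_000
annual_noncash = income * 0.10  # $7,500/year donated in appreciated assets

# Daffy's lifetime non-cash contribution caps by plan
caps = {"basic": 25_000, "family": 50_000}

for plan, cap in caps.items():
    # Year during which cumulative contributions first reach the cap
    year_hit = math.ceil(cap / annual_noncash)
    print(f"{plan}: cap reached during year {year_hit}")
# basic: year 4, family: year 7
```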

&lt;p&gt;Daffy’s $20/month plan has no limit. You can keep $10 million in your DAF and still only pay $20/month.&lt;/p&gt;

&lt;p&gt;If you only use a DAF as a convenient way to donate appreciated stock (or other assets), and you don’t plan on keeping money in the DAF long-term, you want a DAF with no minimum fee. Your best options are Schwab and Charityvest:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Schwab has no minimum fee.&lt;/li&gt;
  &lt;li&gt;Charityvest has a basic plan with no minimum fee, but with the basic plan you can’t hold any investments in your account, only cash. (You can still donate non-cash assets.)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In brief:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Daffy’s basic account is the cheapest of all the providers if you have at least $6,000. Its premium account is the cheapest if you have at least $40,000.&lt;sup id=&quot;fnref:23&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:23&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt;&lt;/li&gt;
&lt;li&gt;Schwab is the cheapest for accounts with less than $6,000 because it’s the only provider with no minimum fee (unless you count a Charityvest cash-only account, which also has none).&lt;/li&gt;
&lt;li&gt;For accounts between $16,667 and $1 million, the cheapest option (other than Daffy basic) is a three-way tie among Charityvest, Fidelity, and Schwab.&lt;sup id=&quot;fnref:24&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:24&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt;&lt;/li&gt;
  &lt;li&gt;Above $1 million, the cheapest might be Charityvest, Schwab, or Vanguard, depending on exactly how much money you have, but the differences between the three are small.&lt;/li&gt;
  &lt;li&gt;If you have enough money (probably starting in the tens of millions), you can negotiate lower fees.&lt;/li&gt;
&lt;/ul&gt;
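&lt;p&gt;The break-even figures above fall out of comparing a flat annual fee against a percentage-based fee. A minimal sketch, assuming a 0.6% first-tier administrative fee (an assumption consistent with the tiers discussed here; check each provider’s current fee schedule):&lt;/p&gt;

```python
# Account value at which a flat annual fee equals an assumed 0.6%
# first-tier administrative fee (verify current rates before relying on this).
pct_fee = 0.006

flat_fees = {
    "Daffy basic ($3/mo)": 36,
    "Daffy premium ($20/mo)": 240,
    "Charityvest minimum ($100/yr)": 100,
}

for name, annual_fee in flat_fees.items():
    breakeven = annual_fee / pct_fee  # flat fee == percentage fee here
    print(f"{name}: break-even at ${breakeven:,.0f}")
# Daffy basic: $6,000; Daffy premium: $40,000; Charityvest: $16,667
```

&lt;p&gt;Below the break-even balance, the percentage fee is smaller; above it, the flat fee wins.&lt;/p&gt;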

&lt;p&gt;Daffy and Charityvest are startups, which makes them &lt;a href=&quot;#how-risky-are-new-daf-providers&quot;&gt;riskier&lt;/a&gt;. If we exclude the two startup-y DAF providers:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;If you have less than $25,000, Schwab is cheapest.&lt;/li&gt;
  &lt;li&gt;If you have between $25,000 and $1 million, the three providers (Greater Horizons, Schwab, Vanguard) charge the same fees.&lt;/li&gt;
  &lt;li&gt;For larger accounts, Vanguard is cheapest.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;minimums&quot;&gt;Minimums&lt;/h3&gt;

&lt;p&gt;Donor-advised funds have four types of minimums:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Minimum contribution to create an account&lt;/li&gt;
  &lt;li&gt;Minimum additional contribution&lt;/li&gt;
  &lt;li&gt;Minimum grant size&lt;/li&gt;
  &lt;li&gt;Minimum size for an advisor-managed account&lt;/li&gt;
&lt;/ol&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt;Charityvest&lt;/th&gt;
      &lt;th&gt;Daffy&lt;/th&gt;
      &lt;th&gt;Greater Horizons&lt;/th&gt;
      &lt;th&gt;Schwab&lt;/th&gt;
      &lt;th&gt;Vanguard&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Account Min&lt;/td&gt;
      &lt;td&gt;$0&lt;/td&gt;
      &lt;td&gt;$0&lt;/td&gt;
      &lt;td&gt;$0&lt;/td&gt;
      &lt;td&gt;$0&lt;/td&gt;
      &lt;td&gt;$25,000&lt;sup id=&quot;fnref:21&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:21&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;5&lt;/a&gt;&lt;/sup&gt;&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Contribution Min&lt;/td&gt;
      &lt;td&gt;$15&lt;/td&gt;
      &lt;td&gt;$0&lt;/td&gt;
      &lt;td&gt;$0&lt;/td&gt;
      &lt;td&gt;$0&lt;/td&gt;
      &lt;td&gt;$5,000&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Grant Min&lt;/td&gt;
      &lt;td&gt;$15&lt;/td&gt;
      &lt;td&gt;$18&lt;/td&gt;
      &lt;td&gt;$0&lt;/td&gt;
      &lt;td&gt;$50&lt;/td&gt;
      &lt;td&gt;$500&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Advisor Min&lt;/td&gt;
      &lt;td&gt;$1 million&lt;/td&gt;
      &lt;td&gt;N/A&lt;/td&gt;
      &lt;td&gt;$0&lt;/td&gt;
      &lt;td&gt;$0&lt;/td&gt;
      &lt;td&gt;N/A&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;h3 id=&quot;investment-choices&quot;&gt;Investment choices&lt;/h3&gt;

&lt;p&gt;If you don’t know much about investing or you want to invest in whatever default your DAF provider uses, skip to the &lt;a href=&quot;#default-investments&quot;&gt;next section&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Let’s compare the investment options for &lt;a href=&quot;https://guide.charityvest.org/for-donors/portfolio-options&quot;&gt;Charityvest&lt;/a&gt;, &lt;a href=&quot;https://www.greaterhorizons.org/wp-content/uploads/2021/01/Investment-Performance-Supplemental-4Q2020-GH.pdf&quot;&gt;Greater Horizons&lt;/a&gt;, &lt;a href=&quot;https://www.schwabcharitable.org/investment-options&quot;&gt;Schwab&lt;/a&gt;, and &lt;a href=&quot;https://www.vanguardcharitable.org/investments&quot;&gt;Vanguard&lt;/a&gt;. Daffy provides many more investment options than the other providers, so I discuss Daffy in &lt;a href=&quot;#daffy-custom-portfolios&quot;&gt;its own section&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The DAF providers offer index funds covering the following asset classes.&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt;Charityvest&lt;/th&gt;
      &lt;th&gt;Greater Horizons&lt;/th&gt;
      &lt;th&gt;Schwab&lt;/th&gt;
      &lt;th&gt;Vanguard&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;US stocks&lt;/td&gt;
      &lt;td&gt;&lt;img src=&quot;/assets/images/green-checkmark.png&quot; class=&quot;checkmark-image&quot; /&gt;&lt;/td&gt;
      &lt;td&gt;&lt;img src=&quot;/assets/images/green-checkmark.png&quot; class=&quot;checkmark-image&quot; /&gt;&lt;/td&gt;
      &lt;td&gt;&lt;img src=&quot;/assets/images/green-checkmark.png&quot; class=&quot;checkmark-image&quot; /&gt;&lt;/td&gt;
      &lt;td&gt;&lt;img src=&quot;/assets/images/green-checkmark.png&quot; class=&quot;checkmark-image&quot; /&gt;&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;US small-cap&lt;/td&gt;
      &lt;td&gt; &lt;/td&gt;
      &lt;td&gt; &lt;/td&gt;
      &lt;td&gt;&lt;img src=&quot;/assets/images/green-checkmark.png&quot; class=&quot;checkmark-image&quot; /&gt;&lt;/td&gt;
      &lt;td&gt; &lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;international stocks&lt;/td&gt;
      &lt;td&gt;&lt;img src=&quot;/assets/images/green-checkmark.png&quot; class=&quot;checkmark-image&quot; /&gt;&lt;/td&gt;
      &lt;td&gt;&lt;img src=&quot;/assets/images/green-checkmark.png&quot; class=&quot;checkmark-image&quot; /&gt;&lt;/td&gt;
      &lt;td&gt;&lt;img src=&quot;/assets/images/green-checkmark.png&quot; class=&quot;checkmark-image&quot; /&gt;&lt;/td&gt;
      &lt;td&gt;&lt;img src=&quot;/assets/images/green-checkmark.png&quot; class=&quot;checkmark-image&quot; /&gt;&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;emerging market stocks&lt;/td&gt;
      &lt;td&gt;&lt;img src=&quot;/assets/images/green-checkmark.png&quot; class=&quot;checkmark-image&quot; /&gt;&lt;/td&gt;
      &lt;td&gt; &lt;/td&gt;
      &lt;td&gt; &lt;/td&gt;
      &lt;td&gt;&lt;img src=&quot;/assets/images/green-checkmark.png&quot; class=&quot;checkmark-image&quot; /&gt;&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;US bonds&lt;/td&gt;
      &lt;td&gt;&lt;img src=&quot;/assets/images/green-checkmark.png&quot; class=&quot;checkmark-image&quot; /&gt;&lt;/td&gt;
      &lt;td&gt;&lt;img src=&quot;/assets/images/green-checkmark.png&quot; class=&quot;checkmark-image&quot; /&gt;&lt;/td&gt;
      &lt;td&gt;&lt;img src=&quot;/assets/images/green-checkmark.png&quot; class=&quot;checkmark-image&quot; /&gt;&lt;/td&gt;
      &lt;td&gt;&lt;img src=&quot;/assets/images/green-checkmark.png&quot; class=&quot;checkmark-image&quot; /&gt;&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;international bonds&lt;/td&gt;
      &lt;td&gt;&lt;img src=&quot;/assets/images/green-checkmark.png&quot; class=&quot;checkmark-image&quot; /&gt;&lt;/td&gt;
      &lt;td&gt; &lt;/td&gt;
      &lt;td&gt; &lt;/td&gt;
      &lt;td&gt;&lt;img src=&quot;/assets/images/green-checkmark.png&quot; class=&quot;checkmark-image&quot; /&gt;&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;TIPS&lt;/td&gt;
      &lt;td&gt; &lt;/td&gt;
      &lt;td&gt; &lt;/td&gt;
      &lt;td&gt;&lt;img src=&quot;/assets/images/green-checkmark.png&quot; class=&quot;checkmark-image&quot; /&gt;&lt;/td&gt;
      &lt;td&gt;&lt;img src=&quot;/assets/images/green-checkmark.png&quot; class=&quot;checkmark-image&quot; /&gt;&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;money market&lt;/td&gt;
      &lt;td&gt;&lt;img src=&quot;/assets/images/green-checkmark.png&quot; class=&quot;checkmark-image&quot; /&gt;&lt;/td&gt;
      &lt;td&gt;&lt;img src=&quot;/assets/images/green-checkmark.png&quot; class=&quot;checkmark-image&quot; /&gt;&lt;/td&gt;
      &lt;td&gt;&lt;img src=&quot;/assets/images/green-checkmark.png&quot; class=&quot;checkmark-image&quot; /&gt;&lt;/td&gt;
      &lt;td&gt;&lt;img src=&quot;/assets/images/green-checkmark.png&quot; class=&quot;checkmark-image&quot; /&gt;&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Vanguard offers funds covering many other market segments, including US growth/value, US &lt;a href=&quot;https://en.wikipedia.org/wiki/Real_estate_investment_trust&quot;&gt;REITs&lt;/a&gt;, European stocks, Pacific stocks, corporate bonds, dividend growth stocks, and commodities. Vanguard easily has the widest variety of index fund offerings.&lt;/p&gt;

&lt;p&gt;The expense ratios on the funds themselves are low enough not to matter as long as you stick with the passively managed funds. (Vanguard and Schwab offer a few actively managed funds with higher fees, which you should avoid.) You can use any of these DAF providers to construct a globally diversified portfolio for an average expense ratio of about 0.05%.&lt;/p&gt;

&lt;p&gt;Vanguard and Charityvest investors can approximately replicate the global market portfolio with 30% US stocks, 30% international stocks, 20% US bonds, and 20% international bonds. Investors with Schwab or Greater Horizons don’t have access to international bonds, so the closest they can get is something like 30% US stocks, 30% international stocks, 40% US bonds.&lt;/p&gt;

&lt;p&gt;Vanguard, Daffy, Charityvest, and Greater Horizons offer a few &lt;a href=&quot;https://en.wikipedia.org/wiki/Environmental,_social_and_corporate_governance&quot;&gt;ESG&lt;/a&gt; funds with low expense ratios. Schwab also has ESG funds, but they’re unreasonably expensive.&lt;/p&gt;

&lt;p&gt;If you want to invest in cryptocurrency, you can either do that through an &lt;a href=&quot;#advisor-managed-accounts&quot;&gt;advisor-managed account&lt;/a&gt;&lt;sup id=&quot;fnref:19&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:19&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;6&lt;/a&gt;&lt;/sup&gt;, or you can invest in a cryptocurrency fund through Daffy. Daffy is the only provider that &lt;a href=&quot;https://www.daffy.org/portfolios#crypto&quot;&gt;offers crypto funds&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In summary:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;If you want the global market portfolio, use Vanguard or Charityvest because they offer an international bond index.&lt;/li&gt;
  &lt;li&gt;If you want ESG funds, use any except Schwab.&lt;/li&gt;
  &lt;li&gt;If you want cryptocurrency (and you don’t want an advisor-managed account), use Daffy.&lt;/li&gt;
  &lt;li&gt;If you want a wide diversity of options, use Vanguard.&lt;/li&gt;
  &lt;li&gt;Otherwise, any provider is a good choice.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;default-investments&quot;&gt;Default investments&lt;/h3&gt;

&lt;p&gt;Each of the five DAF providers offers a few default investment allocations. The defaults cater to investors with different levels of risk tolerance (conservative/balanced/aggressive).&lt;/p&gt;

&lt;p&gt;Schwab’s &lt;a href=&quot;https://www.schwabcharitable.org/resource-center/insights/investment-options&quot;&gt;default investments&lt;/a&gt; are not good because they charge unconscionably high fees.&lt;sup id=&quot;fnref:20&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:20&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;7&lt;/a&gt;&lt;/sup&gt; Schwab does offer some low-fee options (listed in the &lt;a href=&quot;#investment-choices&quot;&gt;previous section&lt;/a&gt;), so if you use Schwab, make sure you choose their low-fee funds instead of the defaults.&lt;/p&gt;

&lt;p&gt;The other four providers—Charityvest, Daffy, Vanguard, and Greater Horizons—charge low fees on their default investments. So if you use one of those providers, you don’t need to worry about fees.&lt;/p&gt;

&lt;p&gt;All five providers offer ESG portfolios. Again, though, Schwab’s ESG funds are unreasonably expensive.&lt;/p&gt;

&lt;p&gt;All five providers’ default funds allocate &lt;a href=&quot;https://mdickens.me/2017/03/26/do_investors_put_too_much_stock_in_the_us/&quot;&gt;too much to US stocks&lt;/a&gt; relative to the global market portfolio. I personally wouldn’t use a default allocation because I’d want to invest more into international stocks. Every provider lets you choose your own allocation if you want to.&lt;/p&gt;

&lt;h3 id=&quot;daffy-custom-portfolios&quot;&gt;Daffy custom portfolios&lt;/h3&gt;

&lt;p&gt;Daffy is the only DAF provider that lets you build a custom portfolio from a long list of eligible ETFs. Not &lt;em&gt;every&lt;/em&gt; ETF is eligible, but there are around 500 choices.&lt;/p&gt;

&lt;p&gt;Custom portfolios are only available for the $20/month tier, so this is not a cost-effective offering for smaller DAFs.&lt;/p&gt;

&lt;p&gt;There is no full list of the available ETFs, but I used the &lt;a href=&quot;https://www.daffy.org/portfolio-builder&quot;&gt;search function&lt;/a&gt;&lt;sup id=&quot;fnref:26&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:26&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;8&lt;/a&gt;&lt;/sup&gt; to find that Daffy allows all of these ETFs:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;All the big Vanguard index funds (&lt;a href=&quot;https://investor.vanguard.com/investment-products/etfs/profile/vt&quot;&gt;VT&lt;/a&gt;, &lt;a href=&quot;https://investor.vanguard.com/investment-products/etfs/profile/vti&quot;&gt;VTI&lt;/a&gt;, &lt;a href=&quot;https://investor.vanguard.com/investment-products/etfs/profile/vxus&quot;&gt;VXUS&lt;/a&gt;, &lt;a href=&quot;https://investor.vanguard.com/investment-products/etfs/profile/vug&quot;&gt;VUG&lt;/a&gt;, &lt;a href=&quot;https://investor.vanguard.com/investment-products/etfs/profile/vtv&quot;&gt;VTV&lt;/a&gt;, &lt;a href=&quot;https://investor.vanguard.com/investment-products/etfs/profile/bnd&quot;&gt;BND&lt;/a&gt;, &lt;a href=&quot;https://investor.vanguard.com/investment-products/etfs/profile/bndx&quot;&gt;BNDX&lt;/a&gt;, &lt;a href=&quot;https://investor.vanguard.com/investment-products/etfs/profile/vb&quot;&gt;VB&lt;/a&gt;, etc.)&lt;/li&gt;
  &lt;li&gt;Alternative assets: emerging market stocks (&lt;a href=&quot;https://investor.vanguard.com/investment-products/etfs/profile/vwo&quot;&gt;VWO&lt;/a&gt;), high-yield bonds (&lt;a href=&quot;https://www.ishares.com/us/products/291299/&quot;&gt;USHY&lt;/a&gt;), commodities (&lt;a href=&quot;https://www.ishares.com/us/products/270319/ishares-commodity-etf&quot;&gt;COMT&lt;/a&gt;), gold (&lt;a href=&quot;https://etfdb.com/etf/AAAU/&quot;&gt;AAAU&lt;/a&gt;), bitcoin (&lt;a href=&quot;https://etfdb.com/etf/FBTC/&quot;&gt;FBTC&lt;/a&gt;), ethereum (&lt;a href=&quot;https://etfdb.com/etf/FETH/&quot;&gt;FETH&lt;/a&gt;), REITs (&lt;a href=&quot;https://investor.vanguard.com/investment-products/etfs/profile/vnq&quot;&gt;VNQ&lt;/a&gt;/&lt;a href=&quot;https://investor.vanguard.com/investment-products/etfs/profile/vnqi&quot;&gt;VNQI&lt;/a&gt;)&lt;/li&gt;
  &lt;li&gt;Sector ETFs (technology, healthcare, consumer staples, etc.)&lt;/li&gt;
&lt;li&gt;Factor funds from Vanguard, iShares, and Schwab, including value funds (&lt;a href=&quot;https://investor.vanguard.com/investment-products/etfs/profile/vfva&quot;&gt;VFVA&lt;/a&gt;, &lt;a href=&quot;https://www.ishares.com/us/products/275382/ishares-msci-international-developed-value-factor-etf&quot;&gt;IVLU&lt;/a&gt;), momentum funds (&lt;a href=&quot;https://investor.vanguard.com/investment-products/etfs/profile/vfmo&quot;&gt;VFMO&lt;/a&gt;, &lt;a href=&quot;https://www.ishares.com/us/products/271538/ishares-msci-international-developed-momentum-factor-etf&quot;&gt;IMTM&lt;/a&gt;), and multi-factor funds (&lt;a href=&quot;https://investor.vanguard.com/investment-products/etfs/profile/vfmf&quot;&gt;VFMF&lt;/a&gt;, &lt;a href=&quot;https://www.ishares.com/us/products/272823/ishares-msci-international-small-cap-multi-factor-etf&quot;&gt;ISCF&lt;/a&gt;, &lt;a href=&quot;https://www.ishares.com/us/products/272820/ishares-msci-emerging-multi-factor-etf&quot;&gt;EMGF&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;(The ETFs I listed are just examples. Daffy has multiple options for most categories.)&lt;/p&gt;

&lt;p&gt;Unfortunately, Daffy does not include some of my favorite ETF providers: &lt;a href=&quot;https://alphaarchitect.com/&quot;&gt;AlphaArchitect&lt;/a&gt;, &lt;a href=&quot;https://www.avantisinvestors.com/&quot;&gt;Avantis&lt;/a&gt;, &lt;a href=&quot;https://cambriafunds.com/&quot;&gt;Cambria&lt;/a&gt;, and &lt;a href=&quot;https://www.returnstackedetfs.com/&quot;&gt;Return Stacked&lt;/a&gt;. It also doesn’t include any managed futures ETFs.&lt;/p&gt;

&lt;h3 id=&quot;advisor-managed-accounts&quot;&gt;Advisor-managed accounts&lt;/h3&gt;

&lt;p&gt;Schwab, Charityvest, and Greater Horizons allow donors to appoint an investment advisor who can invest in things other than the pre-selected funds. Their program guidelines (links: &lt;a href=&quot;https://www.greaterhorizons.org/documents/Financial-Advisor-Program-Guidelines.pdf&quot;&gt;Greater Horizons&lt;/a&gt;, &lt;a href=&quot;https://www.schwabcharitable.org/public/file/P-8085389&quot;&gt;Schwab&lt;/a&gt; (p. 11–14)) dictate what investments are allowed. Charityvest’s advisor-managed accounts must hold a minimum of $1 million.&lt;/p&gt;

&lt;p&gt;Vanguard and Daffy allow advisor-managed accounts, but advisors don’t get to invest in anything beyond what individuals get.&lt;/p&gt;

&lt;p&gt;Some examples of restrictions that apply to advisor-managed accounts at Schwab/Charityvest/Greater Horizons:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;“Risky” investments are not allowed, including margin, short sales, options (except covered calls/puts), futures, and swaps, unless held within a mutual fund or ETF structure. (&lt;a href=&quot;https://mdickens.me/2025/02/04/return_stacked_funds/&quot;&gt;Return stacked funds&lt;/a&gt; should be allowed, and I confirmed that at least Greater Horizons definitely allows them.)&lt;/li&gt;
  &lt;li&gt;The advisor may not charge higher fees to the DAF than to the donor’s personal accounts.&lt;/li&gt;
  &lt;li&gt;The donor’s family members may not serve as investment advisors.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Schwab has some additional restrictions that Greater Horizons does not:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;The account must trade using a Schwab brokerage account.&lt;/li&gt;
  &lt;li&gt;The advisor must establish a benchmark and then track that benchmark reasonably closely. If the account deviates substantially from the benchmark, the advisor is accountable to Schwab’s investment committee.&lt;/li&gt;
  &lt;li&gt;The account must meet certain asset allocation requirements. For example, it cannot allocate more than 25% to any one security, more than 25% to emerging market equities, or more than 50% “to publicly traded funds that pursue alternative or non-diversified investment strategies, e.g., commodities or cryptocurrencies”.&lt;/li&gt;
  &lt;li&gt;Advisors may not invest in funds that they own or manage.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;(Charityvest allegedly also has some restrictions, but they’re not publicly listed.)&lt;/p&gt;

&lt;p&gt;Greater Horizons provides the most flexibility to investment advisors. If I wanted to do anything unconventional, I would use Greater Horizons. (In fact, I do use Greater Horizons for exactly that reason.)&lt;/p&gt;

&lt;h3 id=&quot;contributing-complex-assets&quot;&gt;Contributing complex assets&lt;/h3&gt;

&lt;p&gt;Four of the five providers—&lt;a href=&quot;https://www.greaterhorizons.org/greater-giving/give-complex-assets/&quot;&gt;Greater Horizons&lt;/a&gt;, &lt;a href=&quot;https://www.schwabcharitable.org/non-cash-assets/donate-your-investments&quot;&gt;Schwab&lt;/a&gt;, &lt;a href=&quot;https://guide.charityvest.org/for-donors/contributions-to-fund/complex-assets&quot;&gt;Charityvest&lt;/a&gt;, and &lt;a href=&quot;https://www.vanguardcharitable.org/giving-with-vc/how-it-works/contributions/complex-assets&quot;&gt;Vanguard&lt;/a&gt;—can accept donations of complex assets, including cryptocurrency, private equity, real estate, and more.&lt;sup id=&quot;fnref:5&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:5&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;9&lt;/a&gt;&lt;/sup&gt; &lt;a href=&quot;https://intercom.help/daffy/en/articles/8143913-getting-started-how-do-i-make-a-contribution&quot;&gt;Daffy&lt;/a&gt; can accept cash, stocks/ETFs/mutual funds, and cryptocurrency, but not other types of assets.&lt;/p&gt;

&lt;p&gt;These four providers claim to accept pretty much any type of asset that can legally be donated. But if you want to donate a large position in a complex asset, you should contact your DAF provider of choice to confirm they can receive it.&lt;/p&gt;

&lt;p&gt;These providers all manage donations of complex assets through the same third party (&lt;a href=&quot;https://charitablesolutionsllc.com/non-cash-donations/&quot;&gt;Charitable Solutions LLC&lt;/a&gt;).&lt;sup id=&quot;fnref:8&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:8&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;10&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;(Amusingly, AEF, NPTrust, and Schwab claim they’re “uniquely flexible” when handling complex assets. If they’re all uniquely flexible, I guess that means none of them is?)&lt;/p&gt;

&lt;h3 id=&quot;user-experience&quot;&gt;User experience&lt;/h3&gt;

&lt;p&gt;Most donors probably only care about the basic features of a DAF—they don’t need fancy investment options or the ability to donate complex assets. So the main deciding factors for most people are (1) fees and (2) user experience.&lt;/p&gt;

&lt;p&gt;Of the five providers I’m focusing on, Greater Horizons easily has the worst sign-up process. You have to request that a customer service representative reach out to you. Then they send you a PDF form to fill out and the representative manually creates the account for you. In contrast, the other providers all let you sign up online and you can fill out your personal information on the website.&lt;/p&gt;

&lt;p&gt;In the process of writing this article, I created accounts at Schwab, Vanguard, Daffy, and Charityvest and went through a few common use cases. I found them all easy to use. Vanguard was slightly more complicated than the others.&lt;/p&gt;

&lt;p&gt;Another important aspect of user experience is the quality of customer service. Good customer service matters, but it’s difficult to assess—if you have a good/bad customer service experience, that might have more to do with the specific person you talked to than the quality of service in general. That said, I communicated through email and over the phone with all of the providers to get answers to my questions, and had decent experiences with all of them. I don’t know if my experience with Charityvest would generalize because the CEO specifically reached out to me to ask me to review it (see &lt;a href=&quot;#disclaimers&quot;&gt;disclaimer&lt;/a&gt;). Daffy also reached out to me, but then didn’t do a great job of answering my questions. I have heard anecdotes from a personal friend&lt;sup id=&quot;fnref:13&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:13&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;11&lt;/a&gt;&lt;/sup&gt; and from an independent financial advisor; they report that Daffy customer service was helpful and responsive when dealing with issues.&lt;/p&gt;

&lt;p&gt;I’ve heard that Schwab has particularly good customer service. I’ve also heard that Vanguard the brokerage has worse customer service, but that Vanguard Charitable has a separate customer service department that’s better.&lt;/p&gt;

&lt;p&gt;A side note: You might think it’s a good idea to evaluate user experience by looking at reviews on a site like Trustpilot or Better Business Bureau. I would avoid looking at those sites because they are protection rackets&lt;sup id=&quot;fnref:18&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:18&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;12&lt;/a&gt;&lt;/sup&gt;—they offer to remove bad reviews in exchange for money. So the companies with good reviews are just the ones who pay to get the bad ones deleted.&lt;/p&gt;

&lt;p&gt;I can’t give objective metrics for user experience, but my ranking based on my personal experience is Charityvest &amp;gt; Schwab &amp;gt; Daffy &amp;gt; Vanguard &amp;gt; Greater Horizons.&lt;/p&gt;

&lt;h3 id=&quot;how-risky-are-new-daf-providers&quot;&gt;How risky are new DAF providers?&lt;/h3&gt;

&lt;p&gt;Daffy and Charityvest only started a few years ago. If they can’t earn enough money to sustain themselves, they might have to dissolve customers’ accounts and use their money to pay their bills (which they have a legal right to do, as established by &lt;a href=&quot;https://caselaw.findlaw.com/court/us-4th-circuit/1673902.html&quot;&gt;National Heritage Foundation v. Highbourne Foundation&lt;/a&gt;). You might not want to use Daffy or Charityvest if you’re concerned about the risk.&lt;/p&gt;

&lt;p&gt;I believe that:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;There is a non-trivial probability that Daffy/Charityvest will close down and you’ll have to transfer your money to another DAF provider. (I have no reason to believe they’re riskier than the average startup, but most startups go out of business.)&lt;/li&gt;
  &lt;li&gt;If they do close down, they will almost certainly let you transfer your money to another DAF provider—it’s unlikely that you will lose your money permanently.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Here’s why I don’t believe you are likely to lose your money:&lt;/p&gt;

&lt;p&gt;Charityvest and Daffy both use a financial structure that is (as far as I know) novel for DAF providers. They were founded concurrently with sister corporations (Vennfi, Inc. for Charityvest and Aside, Inc. for Daffy). Vennfi/Aside receive funding from VC firms and Charityvest/Daffy pay the corporations to develop their technology. This lets the corporations run at a loss using VC funding until the DAF providers grow large enough to make the corporations profitable.&lt;/p&gt;

&lt;p&gt;This structure works in donors’ favor. If the for-profit corporation declares bankruptcy, VC funders can only make bankruptcy claims against the corporation, not against the DAF provider, so donors’ money is safe. And according to their 2022 public filings (viewable through the &lt;a href=&quot;https://csapp.fdacs.gov/CSPublicApp/CheckACharity/CheckACharity.aspx&quot;&gt;Florida Check-A-Charity website&lt;/a&gt;), Charityvest/Daffy have very little debt (or at least they did as of 2022).&lt;sup id=&quot;fnref:25&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:25&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;13&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;h1 id=&quot;the-best-daf-providers&quot;&gt;The best DAF provider(s)&lt;/h1&gt;

&lt;p&gt;This flowchart shows the best DAF provider depending on your circumstances.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;/assets/images/DAF-flowchart-v4.png&quot;&gt;&lt;img src=&quot;/assets/images/DAF-flowchart-v4.png&quot; /&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A text description of this flowchart:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;If you want an advisor-managed account, use Greater Horizons.&lt;/li&gt;
  &lt;li&gt;If you want to set your own investment allocation from a list of index funds:
    &lt;ul&gt;
      &lt;li&gt;If you feel comfortable giving your money to a startup with an unproven track record, use Charityvest.&lt;/li&gt;
      &lt;li&gt;Otherwise, use Vanguard (if your DAF will hold at least $25,000) or Schwab (otherwise).&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;If you are happy to use the DAF provider’s default investment allocation:
    &lt;ul&gt;
      &lt;li&gt;If you feel comfortable giving your money to a startup with an unproven track record, use Daffy (if your DAF will hold at least $40,000) or Charityvest (otherwise).&lt;/li&gt;
      &lt;li&gt;Otherwise, use Vanguard (if your DAF will hold at least $25,000) or Schwab (otherwise).&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
&lt;/ul&gt;
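For readers who prefer code to flowcharts, the decision logic above can be sketched as a small Python function. This is just a restatement of the flowchart under my assumptions; the parameter names are my own invention, and the dollar thresholds are the ones given above.

```python
def recommend_daf(advisor_managed: bool = False,
                  choose_own_funds: bool = False,
                  ok_with_startup: bool = False,
                  balance: float = 0) -> str:
    """Restate the recommendation flowchart. Parameter names are
    illustrative; `balance` is the amount the DAF will hold, in USD."""
    if advisor_managed:
        return "Greater Horizons"
    if choose_own_funds:
        # You pick your own allocation from a list of index funds.
        if ok_with_startup:
            return "Charityvest"
        return "Vanguard" if balance >= 25_000 else "Schwab"
    # You use the provider's default investment allocation.
    if ok_with_startup:
        return "Daffy" if balance >= 40_000 else "Charityvest"
    return "Vanguard" if balance >= 25_000 else "Schwab"
```

For example, `recommend_daf(choose_own_funds=True, balance=10_000)` returns `"Schwab"`, since $10,000 is under Vanguard’s $25,000 minimum.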

&lt;p&gt;Some other factors you might care about, which I left off the flowchart because it would have gotten too big:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;If you only plan on keeping money in your DAF for a short time, use Schwab or Charityvest because they let you keep an empty account without charging you (the others charge a flat fee).&lt;/li&gt;
  &lt;li&gt;If you want to invest in cryptocurrency (without using an advisor-managed account), use Daffy.&lt;/li&gt;
  &lt;li&gt;If you want to donate complex assets (such as real estate or artwork), &lt;em&gt;don’t&lt;/em&gt; use Daffy, but any of the other providers should be able to accommodate you.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This table illustrates my evaluation of the providers along five dimensions, ranked as bronze, silver, or gold.&lt;/p&gt;

&lt;table&gt;
  &lt;tr&gt;&lt;th style=&quot;width: 16%;&quot;&gt;&lt;/th&gt;&lt;th style=&quot;width: 16%;&quot;&gt;Fees&lt;/th&gt;&lt;th style=&quot;width: 16%;&quot;&gt;Investment Variety&lt;/th&gt;&lt;th style=&quot;width: 16%;&quot;&gt;User Experience&lt;/th&gt;&lt;th style=&quot;width: 16%;&quot;&gt;Advanced Features&lt;/th&gt;&lt;th&gt;Riskiness&lt;/th&gt;&lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;Charityvest&lt;/td&gt;
    &lt;td&gt;
        &lt;div style=&quot;height: 40px; width: 40px; border-radius: 50%; display: inline-block; background-color: gold;&quot;&gt;&lt;/div&gt;
    &lt;/td&gt;
    &lt;td&gt;
        &lt;div style=&quot;height: 40px; width: 40px; border-radius: 50%; display: inline-block; background-color: silver;&quot;&gt;&lt;/div&gt;
    &lt;/td&gt;
    &lt;td&gt;
        &lt;div style=&quot;height: 40px; width: 40px; border-radius: 50%; display: inline-block; background-color: gold;&quot;&gt;&lt;/div&gt;
    &lt;/td&gt;
    &lt;td&gt;
        &lt;div style=&quot;height: 40px; width: 40px; border-radius: 50%; display: inline-block; background-color: silver;&quot;&gt;&lt;/div&gt;
    &lt;/td&gt;
    &lt;td&gt;
        &lt;div style=&quot;height: 40px; width: 40px; border-radius: 50%; display: inline-block; background-color: chocolate;&quot;&gt;&lt;/div&gt;
    &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;Daffy&lt;/td&gt;
    &lt;td&gt;
        &lt;div style=&quot;height: 40px; width: 40px; border-radius: 50%; display: inline-block; background-color: gold;&quot;&gt;&lt;/div&gt;
    &lt;/td&gt;
    &lt;td&gt;
        &lt;div style=&quot;height: 40px; width: 40px; border-radius: 50%; display: inline-block; background-color: chocolate;&quot;&gt;&lt;/div&gt;
    &lt;/td&gt;
    &lt;td&gt;
        &lt;div style=&quot;height: 40px; width: 40px; border-radius: 50%; display: inline-block; background-color: gold;&quot;&gt;&lt;/div&gt;
    &lt;/td&gt;
    &lt;td&gt;
        &lt;div style=&quot;height: 40px; width: 40px; border-radius: 50%; display: inline-block; background-color: chocolate;&quot;&gt;&lt;/div&gt;
    &lt;/td&gt;
    &lt;td&gt;
        &lt;div style=&quot;height: 40px; width: 40px; border-radius: 50%; display: inline-block; background-color: chocolate;&quot;&gt;&lt;/div&gt;
    &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;Greater Horizons&lt;/td&gt;
    &lt;td&gt;
        &lt;div style=&quot;height: 40px; width: 40px; border-radius: 50%; display: inline-block; background-color: chocolate;&quot;&gt;&lt;/div&gt;
    &lt;/td&gt;
    &lt;td&gt;
        &lt;div style=&quot;height: 40px; width: 40px; border-radius: 50%; display: inline-block; background-color: gold;&quot;&gt;&lt;/div&gt;
    &lt;/td&gt;
    &lt;td&gt;
        &lt;div style=&quot;height: 40px; width: 40px; border-radius: 50%; display: inline-block; background-color: chocolate;&quot;&gt;&lt;/div&gt;
    &lt;/td&gt;
    &lt;td&gt;
        &lt;div style=&quot;height: 40px; width: 40px; border-radius: 50%; display: inline-block; background-color: gold;&quot;&gt;&lt;/div&gt;
    &lt;/td&gt;
    &lt;td&gt;
        &lt;div style=&quot;height: 40px; width: 40px; border-radius: 50%; display: inline-block; background-color: silver;&quot;&gt;&lt;/div&gt;
    &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;Schwab&lt;/td&gt;
    &lt;td&gt;
        &lt;div style=&quot;height: 40px; width: 40px; border-radius: 50%; display: inline-block; background-color: chocolate;&quot;&gt;&lt;/div&gt;
    &lt;/td&gt;
    &lt;td&gt;
        &lt;div style=&quot;height: 40px; width: 40px; border-radius: 50%; display: inline-block; background-color: silver;&quot;&gt;&lt;/div&gt;
    &lt;/td&gt;
    &lt;td&gt;
        &lt;div style=&quot;height: 40px; width: 40px; border-radius: 50%; display: inline-block; background-color: silver;&quot;&gt;&lt;/div&gt;
    &lt;/td&gt;
    &lt;td&gt;
        &lt;div style=&quot;height: 40px; width: 40px; border-radius: 50%; display: inline-block; background-color: gold;&quot;&gt;&lt;/div&gt;
    &lt;/td&gt;
    &lt;td&gt;
        &lt;div style=&quot;height: 40px; width: 40px; border-radius: 50%; display: inline-block; background-color: gold;&quot;&gt;&lt;/div&gt;
    &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;Vanguard&lt;/td&gt;
    &lt;td&gt;
        &lt;div style=&quot;height: 40px; width: 40px; border-radius: 50%; display: inline-block; background-color: silver;&quot;&gt;&lt;/div&gt;
    &lt;/td&gt;
    &lt;td&gt;
        &lt;div style=&quot;height: 40px; width: 40px; border-radius: 50%; display: inline-block; background-color: gold;&quot;&gt;&lt;/div&gt;
    &lt;/td&gt;
    &lt;td&gt;
        &lt;div style=&quot;height: 40px; width: 40px; border-radius: 50%; display: inline-block; background-color: silver;&quot;&gt;&lt;/div&gt;
    &lt;/td&gt;
    &lt;td&gt;
        &lt;div style=&quot;height: 40px; width: 40px; border-radius: 50%; display: inline-block; background-color: gold;&quot;&gt;&lt;/div&gt;
    &lt;/td&gt;
    &lt;td&gt;
        &lt;div style=&quot;height: 40px; width: 40px; border-radius: 50%; display: inline-block; background-color: gold;&quot;&gt;&lt;/div&gt;
    &lt;/td&gt;
  &lt;/tr&gt;
&lt;/table&gt;

&lt;p&gt;So far, I have only looked at Charityvest, Daffy, Greater Horizons, Schwab, and Vanguard. That’s not because I believe these are better across the board than the other providers I didn’t discuss. For example, I think Fidelity has a better UI than Vanguard; and NPTrust offers more flexibility than Schwab for advisor-managed accounts. Rather, I chose these five DAF providers because each one is the best at something, while the other providers are not the best at anything (even though they are good at some things).&lt;/p&gt;

&lt;h2 id=&quot;more-on-fidelity-aef-nptrust-and-t-rowe-price&quot;&gt;More on Fidelity, AEF, NPTrust, and T. Rowe Price&lt;/h2&gt;

&lt;p&gt;Even though I didn’t compare them in detail, here are my impressions of the other four DAF providers, ranked from best to worst.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Fidelity Charitable&lt;/strong&gt; is as good as Schwab for most purposes. I like it better than T. Rowe Price, AEF, or NPTrust. I like Fidelity a little bit less than Schwab because it provides fewer investment options and the UI is slightly harder to use&lt;sup id=&quot;fnref:7&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:7&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;14&lt;/a&gt;&lt;/sup&gt;, but Fidelity still seems like a fine choice as long as you’re careful to avoid their high-fee investment options.
    &lt;ul&gt;
      &lt;li&gt;Bonus fact: If you have at least $5 million, Fidelity Charitable lets you manage your own investments (without appointing an advisor). I don’t believe any of the other DAF providers let you do that.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;NPTrust&lt;/strong&gt; and &lt;strong&gt;AEF&lt;/strong&gt; are maybe passable but they have some serious issues:
    &lt;ul&gt;
      &lt;li&gt;They’re more opaque than the other DAF providers, and I had a bad experience talking to them.&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;15&lt;/a&gt;&lt;/sup&gt;&lt;/li&gt;
      &lt;li&gt;They make it more difficult to set up an account.&lt;/li&gt;
      &lt;li&gt;For advisor-managed accounts, they impose more restrictions on the investment advisor than Greater Horizons does.&lt;/li&gt;
      &lt;li&gt;They charge somewhat higher administrative fees than Fidelity/Schwab/Vanguard.&lt;/li&gt;
      &lt;li&gt;NPTrust’s pre-selected investment funds are too expensive. NPTrust should only be used with an advisor-managed account. AEF does not have pre-selected funds—you must appoint an investment advisor.&lt;/li&gt;
      &lt;li&gt;AEF’s website is so buggy that it’s unusable on Firefox. Even ignoring the bugs, AEF has a much worse UI than Fidelity/Schwab/Vanguard.&lt;/li&gt;
      &lt;li&gt;A former AEF user reached out to me to give some more information about AEF:
        &lt;ul&gt;
          &lt;li&gt;The donor portal has limited functionality; for example, there is no way to get a statement of transactions.&lt;/li&gt;
          &lt;li&gt;AEF charges a 1% fee if you withdraw or grant &amp;gt;80% of your funds within six months of opening the DAF.&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;T. Rowe Price&lt;/strong&gt; Charitable is too expensive to be worth using. If you have a DAF at T. Rowe Price, you should consider switching to a new provider.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1 id=&quot;appendix-table-of-administrative-fees&quot;&gt;Appendix: Table of administrative fees&lt;/h1&gt;

&lt;p&gt;Fees can be found at these links:
&lt;a href=&quot;https://www.aefonline.org/fee-schedule&quot;&gt;AEF&lt;/a&gt;&lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;16&lt;/a&gt;&lt;/sup&gt;, &lt;a href=&quot;https://guide.charityvest.org/for-donors/fees-and-expenses&quot;&gt;Charityvest&lt;/a&gt;, &lt;a href=&quot;https://www.fidelitycharitable.org/giving-account/what-it-costs.html&quot;&gt;Fidelity&lt;/a&gt;, &lt;a href=&quot;https://www.nptrust.org/wp-content/uploads/2023/09/A-Guide-to-your-Donor-Advised-Fund-NPT.pdf&quot;&gt;NPTrust&lt;/a&gt;, &lt;a href=&quot;https://www.schwabcharitable.org/features/fees-and-minimums&quot;&gt;Schwab&lt;/a&gt;, &lt;a href=&quot;https://www.trowepricecharitable.org/how-it-works/program-fees.html&quot;&gt;T. Rowe Price&lt;/a&gt;, &lt;a href=&quot;https://www.vanguardcharitable.org/giving-with-vc/fees-and-minimums&quot;&gt;Vanguard&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Greater Horizons does not publish its fees online. I learned what it charges by speaking to a representative. I don’t like it when companies don’t publish their fees, but I will respect Greater Horizons’ preferences by not disclosing them. If you want to know the specifics, &lt;a href=&quot;https://www.greaterhorizons.org/contact-us/&quot;&gt;contact them directly&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Minimum fees:&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt;AEF&lt;/th&gt;
      &lt;th&gt;Charityvest&lt;/th&gt;
      &lt;th&gt;Daffy&lt;/th&gt;
      &lt;th&gt;Fidelity&lt;/th&gt;
      &lt;th&gt;NPTrust&lt;/th&gt;
      &lt;th&gt;Schwab&lt;/th&gt;
      &lt;th&gt;T. Rowe Price&lt;/th&gt;
      &lt;th&gt;Vanguard&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Minimum Fee&lt;/td&gt;
      &lt;td&gt;$500&lt;/td&gt;
      &lt;td&gt;$48&lt;/td&gt;
      &lt;td&gt;$36&lt;/td&gt;
      &lt;td&gt;$100&lt;/td&gt;
      &lt;td&gt;None&lt;/td&gt;
      &lt;td&gt;None&lt;/td&gt;
      &lt;td&gt;None&lt;/td&gt;
      &lt;td&gt;$250&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Percentage-based fees:&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Account Value&lt;/th&gt;
      &lt;th&gt;AEF&lt;/th&gt;
      &lt;th&gt;Charityvest&lt;/th&gt;
      &lt;th&gt;Fidelity&lt;/th&gt;
      &lt;th&gt;NPTrust&lt;/th&gt;
      &lt;th&gt;Schwab&lt;/th&gt;
      &lt;th&gt;T. Rowe Price&lt;/th&gt;
      &lt;th&gt;Vanguard&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;First $250K&lt;/td&gt;
      &lt;td&gt;0.70%&lt;/td&gt;
      &lt;td&gt;0.60%&lt;/td&gt;
      &lt;td&gt;0.60%&lt;/td&gt;
      &lt;td&gt;0.85%&lt;/td&gt;
      &lt;td&gt;0.60%&lt;/td&gt;
      &lt;td&gt;0.50%&lt;/td&gt;
      &lt;td&gt;0.60%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Next $250K&lt;/td&gt;
      &lt;td&gt;0.70%&lt;/td&gt;
      &lt;td&gt;0.60%&lt;/td&gt;
      &lt;td&gt;0.60%&lt;/td&gt;
      &lt;td&gt;0.70%&lt;/td&gt;
      &lt;td&gt;0.60%&lt;/td&gt;
      &lt;td&gt;0.50%&lt;/td&gt;
      &lt;td&gt;0.60%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Next $500K&lt;/td&gt;
      &lt;td&gt;0.35%&lt;/td&gt;
      &lt;td&gt;0.30%&lt;/td&gt;
      &lt;td&gt;0.30%&lt;/td&gt;
      &lt;td&gt;0.60%&lt;/td&gt;
      &lt;td&gt;0.30%&lt;/td&gt;
      &lt;td&gt;0.39%&lt;/td&gt;
      &lt;td&gt;0.30%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Next $1.5M&lt;/td&gt;
      &lt;td&gt;0.25%&lt;/td&gt;
      &lt;td&gt;0.20%&lt;/td&gt;
      &lt;td&gt;0.20%&lt;/td&gt;
      &lt;td&gt;0.45%&lt;/td&gt;
      &lt;td&gt;0.20%&lt;/td&gt;
      &lt;td&gt;0.18%&lt;/td&gt;
      &lt;td&gt;0.13%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Next $2.5M&lt;/td&gt;
      &lt;td&gt;0.15%&lt;/td&gt;
      &lt;td&gt;0.15%&lt;/td&gt;
      &lt;td&gt;0.15%&lt;/td&gt;
      &lt;td&gt;0.25%&lt;/td&gt;
      &lt;td&gt;0.15%&lt;/td&gt;
      &lt;td&gt;0.12%&lt;/td&gt;
      &lt;td&gt;0.13%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Next $5M&lt;/td&gt;
      &lt;td&gt;0.15%&lt;/td&gt;
      &lt;td&gt;0.10%&lt;/td&gt;
      &lt;td&gt; &lt;/td&gt;
      &lt;td&gt;0.15%&lt;/td&gt;
      &lt;td&gt;0.13%&lt;/td&gt;
      &lt;td&gt;0.10%&lt;/td&gt;
      &lt;td&gt;0.13%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Next $5M&lt;/td&gt;
      &lt;td&gt;0.15%&lt;/td&gt;
      &lt;td&gt; &lt;/td&gt;
      &lt;td&gt; &lt;/td&gt;
      &lt;td&gt;0.10%&lt;/td&gt;
      &lt;td&gt;0.12%&lt;/td&gt;
      &lt;td&gt;0.10%&lt;/td&gt;
      &lt;td&gt;0.13%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Next $15M&lt;/td&gt;
      &lt;td&gt;0.10%&lt;/td&gt;
      &lt;td&gt; &lt;/td&gt;
      &lt;td&gt; &lt;/td&gt;
      &lt;td&gt; &lt;/td&gt;
      &lt;td&gt;0.10%&lt;/td&gt;
      &lt;td&gt;0.09%&lt;/td&gt;
      &lt;td&gt;0.10%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Over $30M&lt;/td&gt;
      &lt;td&gt; &lt;/td&gt;
      &lt;td&gt; &lt;/td&gt;
      &lt;td&gt; &lt;/td&gt;
      &lt;td&gt; &lt;/td&gt;
      &lt;td&gt; &lt;/td&gt;
      &lt;td&gt; &lt;/td&gt;
      &lt;td&gt;0.05%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Caveats:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Charityvest has no minimum fee on cash-only accounts.&lt;/li&gt;
  &lt;li&gt;Daffy charges $36 per year, $60 per year, or $240 per year, depending on your account type. It does not charge any percentage-based fees.&lt;/li&gt;
  &lt;li&gt;Fidelity uses a flat fee schedule on accounts with over $5 million.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Some observations:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Daffy is the cheapest if you have more than $40,000.&lt;/li&gt;
  &lt;li&gt;Other than Daffy, Charityvest is cheapest or tied for cheapest above $6,000, and Schwab is cheapest below $6,000.&lt;/li&gt;
  &lt;li&gt;T. Rowe Price has the 3rd lowest administrative fees, but its investment options charge high expense ratios. It only offers two reasonably-priced funds (an S&amp;amp;P 500 index fund and a “balanced” index fund). T. Rowe Price only has the 3rd lowest all-in fee if you invest in the S&amp;amp;P 500 and nothing else.&lt;/li&gt;
  &lt;li&gt;Even though Schwab has lower fees than Vanguard at one tier, there is no account value at which Vanguard has a higher total fee.&lt;/li&gt;
  &lt;li&gt;Even though NPTrust has lower fees than Schwab at one tier, there is no account value at which Schwab has a higher total fee.&lt;/li&gt;
  &lt;li&gt;Fidelity’s flat fee structure above $5 million means that near the bottom of a tier (e.g., $11 million), it’s cheaper than Schwab or Vanguard, and near the top of a tier (e.g., $19 million), it’s more expensive.&lt;sup id=&quot;fnref:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;17&lt;/a&gt;&lt;/sup&gt;&lt;/li&gt;
&lt;/ul&gt;
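To make the breakeven observations above easier to check, here is a minimal sketch of how total annual fees fall out of a tiered schedule. The rates are copied from the Schwab and Vanguard columns of the percentage-fee table; the function and variable names are my own, and minimum fees (from the separate table) are passed in explicitly.

```python
# Tier widths, in USD, matching the rows of the percentage-fee table.
TIER_WIDTHS = [250_000, 250_000, 500_000, 1_500_000,
               2_500_000, 5_000_000, 5_000_000, 15_000_000]

# Per-tier rates from the Schwab and Vanguard columns.
SCHWAB   = [0.0060, 0.0060, 0.0030, 0.0020, 0.0015, 0.0013, 0.0012, 0.0010]
VANGUARD = [0.0060, 0.0060, 0.0030, 0.0013, 0.0013, 0.0013, 0.0013, 0.0010]

def annual_fee(balance: float, rates: list, minimum: float = 0.0) -> float:
    """Total administrative fee on `balance`: apply each tier's rate to the
    slice of the balance that falls within that tier, then apply the
    minimum fee if the percentage-based total comes in below it."""
    fee, remaining = 0.0, balance
    for width, rate in zip(TIER_WIDTHS, rates):
        portion = min(remaining, width)
        fee += portion * rate
        remaining -= portion
        if remaining <= 0:
            break
    return max(fee, minimum)

# At $2M, Vanguard's lower mid-tier rates win: $5,800 vs. Schwab's $6,500.
# Sweeping balances up to $30M confirms the observation above: Vanguard's
# percentage-based fee never exceeds Schwab's at any account value.
```

The minimum fee matters at small balances: `annual_fee(30_000, VANGUARD, minimum=250)` gives $250, since 0.60% of $30,000 is only $180.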

&lt;h1 id=&quot;disclaimers&quot;&gt;Disclaimers&lt;/h1&gt;

&lt;p&gt;The original version of this article did not include Charityvest or Daffy. I updated the article when representatives from Charityvest and Daffy reached out to ask me to include them. Everything I wrote about them is my own opinion.&lt;/p&gt;

&lt;p&gt;I learned about Charityvest in May 2021, when the CEO, Stephen Kump, emailed me to ask me to review it prior to its public launch. He gave me access to the beta (which I used to check out Charityvest) and added $20 to my account (which I did not use, to avoid any conflict of interest). I did not receive any other compensation for writing about Charityvest.&lt;/p&gt;

&lt;p&gt;In early 2022, a representative from Daffy reached out to me, and I proceeded to forget about their message for a year. I rediscovered it in late 2023, when a couple of other people asked me for my opinion on Daffy and I decided to update this article to include it. I did not receive any incentive from Daffy.&lt;/p&gt;

&lt;h1 id=&quot;changelog&quot;&gt;Changelog&lt;/h1&gt;

&lt;ul&gt;
  &lt;li&gt;2022-03-10: Add a Charityvest review and add it to the recommendation flowchart.&lt;/li&gt;
  &lt;li&gt;2023-01-06: Update fee minimums.&lt;/li&gt;
  &lt;li&gt;2024-04-24: Significant revisions:
    &lt;ul&gt;
      &lt;li&gt;Add changelog.&lt;/li&gt;
      &lt;li&gt;Add a Daffy review.&lt;/li&gt;
      &lt;li&gt;Add a new section on default investment options.&lt;/li&gt;
      &lt;li&gt;Correct an error: I previously wrote that Vanguard’s account minimum is $0, but it is $25,000.&lt;/li&gt;
      &lt;li&gt;Update information on T. Rowe Price: I originally wrote that it only offered one reasonably-priced fund, but now it offers two.&lt;/li&gt;
      &lt;li&gt;Substantially change recommendations based on a number of factors:
        &lt;ul&gt;
          &lt;li&gt;Add consideration to a provider’s default investment options. Previously, I hadn’t considered them at all, but I expect many (most?) DAF users will use the defaults, so that was an oversight.&lt;/li&gt;
          &lt;li&gt;Downgrade my recommendation of Schwab. Previously, I had Schwab as my top recommendation for most people, but their default funds are too expensive.&lt;/li&gt;
          &lt;li&gt;Add Daffy as a possible recommendation and rearrange the flowchart accordingly.&lt;/li&gt;
          &lt;li&gt;Remove the “best DAF provider for most people” recommendation (previously I recommended Schwab). Every DAF provider has some hiccup that a significant number of people won’t like, so I don’t think I can reasonably give a general recommendation. (Schwab’s default funds are too expensive; Vanguard is too expensive for small donors; Charityvest/Daffy are new orgs that might go out of business.)&lt;/li&gt;
          &lt;li&gt;Generally rewrite to increase clarity.&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;2025-01-31: Add some more information about AEF.&lt;/li&gt;
  &lt;li&gt;2025-02-05: Update to reflect Charityvest’s recently raised fees. Previously Charityvest was cheaper than Fidelity/Schwab at most account values, and now it’s a three-way tie up to $5 million.&lt;/li&gt;
  &lt;li&gt;2025-05-09:
    &lt;ul&gt;
      &lt;li&gt;Daffy now allows you to choose your own ETFs. Update to explain this and incorporate it into the flowchart.&lt;/li&gt;
      &lt;li&gt;On the previous update, I forgot to account for Charityvest’s raised fee when determining which DAF provider is cheapest. Update to fix this.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
&lt;/ul&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:16&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I don’t know how DAFs work outside the United States. If your country does have DAFs, you could use this post as a guide to what to look for when choosing a provider. &lt;a href=&quot;#fnref:16&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:22&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I can’t provide a link for Greater Horizons because its fees are not publicly available. But I will say that they’re broadly similar to Vanguard and Schwab. &lt;a href=&quot;#fnref:22&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:23&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;At $40,000, Charityvest/Schwab/Vanguard’s 0.60% fee costs $240 per year, which exactly equals Daffy’s fee.&lt;/p&gt;

      &lt;p&gt;At $6,000, Charityvest is (marginally) more expensive than Daffy’s basic account. But Schwab’s 0.6% fee costs $36/year, the same as Daffy. &lt;a href=&quot;#fnref:23&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:24&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;At $16,667, Schwab’s 0.6% fee equals Charityvest’s $100/year minimum. &lt;a href=&quot;#fnref:24&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:21&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Vanguard requires an initial contribution of $25,000, but you do not need to maintain a balance of $25,000. &lt;a href=&quot;#fnref:21&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:19&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;An advisor-managed account will probably still have limits on how much cryptocurrency you can hold, something like 50% of your portfolio. Daffy lets you invest up to 100% of your portfolio in crypto. &lt;a href=&quot;#fnref:19&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:20&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Schwab actually got sued for its high fees (see &lt;a href=&quot;https://caselaw.findlaw.com/court/us-9th-circuit/1912100.html&quot;&gt;Philip Pinkert v. Schwab Charitable Fund&lt;/a&gt;), but the court ruled in Schwab’s favor on the basis that Schwab legally owns donated funds and therefore has no fiduciary duty toward donors. &lt;a href=&quot;#fnref:20&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:26&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Note: You have to be logged in to view this link. You can create an account without paying a membership fee. &lt;a href=&quot;#fnref:26&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:5&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Vanguard’s website does not state that they can accept cryptocurrency, but I confirmed with a representative that they take donations of cryptocurrency if the value of the contribution is at least $50,000. &lt;a href=&quot;#fnref:5&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:8&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;AEF, Fidelity, and NPTrust also use Charitable Solutions LLC. It seems that Charitable Solutions LLC has something of a monopoly on managing donations of complex assets. &lt;a href=&quot;#fnref:8&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:13&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;h/t Barak Gila for talking to me about his experiences with Daffy. &lt;a href=&quot;#fnref:13&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:18&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I have strong private evidence for this claim. The public evidence for this claim is weak because review site protection rackets, much like the mafia, know how to avoid leaving any incriminating evidence. But you can find many anecdotes, such as &lt;a href=&quot;https://dontrust-pilot.com/&quot;&gt;here&lt;/a&gt;, &lt;a href=&quot;https://medium.com/@ryanbadger/how-trustpilot-extorts-businesses-91ca4b6a6f62&quot;&gt;here&lt;/a&gt;, and &lt;a href=&quot;https://krystal.io/blog/post/trustpilot-pretend-they-re-superman-actually-homelander&quot;&gt;here&lt;/a&gt; (see also &lt;a href=&quot;https://www.linkedin.com/posts/simonblackler_entrepreneurship-technology-sales-activity-7143186098553253888-3JyG/&quot;&gt;the comment section on LinkedIn&lt;/a&gt;) from business-owners who claim Trustpilot extorted them, and &lt;a href=&quot;https://www.thisismoney.co.uk/money/news/article-7549909/Can-trust-Trustpilots-reviews-firms-dont-pay-it.html&quot;&gt;here&lt;/a&gt; from an alleged former employee of Trustpilot. I didn’t look for evidence on Better Business Bureau, but in my experience it’s more widely known that BBB is a protection racket, so evidence shouldn’t be hard to find. &lt;a href=&quot;#fnref:18&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:25&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;They both had less than $20,000 of debt against tens of millions of dollars of assets, and the debt was entirely accounts payable. I had to look up what “accounts payable” means, but if I understand correctly, it means they were charged a fee (presumably by the partner corporation) that they haven’t paid yet. &lt;a href=&quot;#fnref:25&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:7&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Fidelity’s and Schwab’s UIs are nearly identical. The only real difference I noticed is that Schwab does a better job of consolidating your accounts. If you have both a brokerage account and a DAF at Schwab, you can easily switch between them from the landing page. If you have both types of accounts at Fidelity, you can navigate to your DAF from the landing page, but it sends you to a separate URL and sometimes requires you to log in again. &lt;a href=&quot;#fnref:7&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;My incredible customer service experience with NPTrust:&lt;/p&gt;

      &lt;ul&gt;
        &lt;li&gt;I send them an email to ask what types of investments they allow.&lt;/li&gt;
        &lt;li&gt;A representative replies and says we need to set up a phone call.&lt;/li&gt;
        &lt;li&gt;On the phone call, I repeat my question. They then email me a PDF that contains their investment requirements.&lt;/li&gt;
      &lt;/ul&gt;

      &lt;p&gt;I’m tempted to send them one of those “this meeting could have been an email” &lt;a href=&quot;https://mbprints.co.uk/product/this-meeting-could-have-been-an-email-mug/&quot;&gt;coffee mugs&lt;/a&gt;. Or, better yet, don’t make me email you; just list your offerings on your website. &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:2&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;AEF does not provide a fee schedule, only a fee calculator. I used the calculator to reverse-engineer the fee schedule. &lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:3&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;For the mathematically inclined, this function converts Fidelity’s flat fee into an equivalent tiered fee on amounts over $5 million, given two variables &lt;code&gt;AUM&lt;/code&gt; and &lt;code&gt;flat_fee&lt;/code&gt;:&lt;/p&gt;

      &lt;pre&gt;&lt;code&gt;tiered_fee = ((AUM * flat_fee) - $11,250)
             / (AUM - $5,000,000)
&lt;/code&gt;&lt;/pre&gt;
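      &lt;p&gt;The same conversion as a Python sketch (the function name is mine; the $11,250 constant is taken from the formula above):&lt;/p&gt;

      &lt;pre&gt;&lt;code&gt;def tiered_fee_rate(aum, flat_fee):
    # Equivalent marginal fee rate on assets above $5 million,
    # given a flat fee rate on all assets under management and
    # the $11,250 implied by the formula above.
    return (aum * flat_fee - 11_250) / (aum - 5_000_000)
&lt;/code&gt;&lt;/pre&gt;

      &lt;p&gt;For example, &lt;code&gt;tiered_fee_rate(10_000_000, 0.003)&lt;/code&gt; gives a rate of 0.375% on the amount above $5 million.&lt;/p&gt;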
      &lt;p&gt;&lt;a href=&quot;#fnref:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>More Evidence on Concentrated Stock Selection</title>
				<pubDate>Thu, 01 Apr 2021 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2021/04/01/more_evidence_concentrated_stock_selection/</link>
				<guid isPermaLink="true">http://mdickens.me/2021/04/01/more_evidence_concentrated_stock_selection/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;In &lt;a href=&quot;https://mdickens.me/2021/02/08/concentrated_stock_selection/&quot;&gt;Can Investors Simulate Leverage via Concentrated Stock Selection?&lt;/a&gt;, I used the &lt;a href=&quot;https://mba.tuck.dartmouth.edu/pages/faculty/ken.french/data_library.html&quot;&gt;Ken French Data Library&lt;/a&gt; to show that concentrated value or momentum factor portfolios historically had both higher returns and higher risk than diversified value or momentum portfolios. Thanks to a &lt;a href=&quot;https://www.dropbox.com/sh/ep40dynf2w3lck4/AACYXMDxbGOv21lVK7OC4fq7a&quot;&gt;newly released dataset&lt;/a&gt;, we can replicate this finding across 137 factors.&lt;/p&gt;

&lt;!-- more --&gt;

&lt;p&gt;Jensen, Kelly &amp;amp; Pedersen (2021), &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3774514&quot;&gt;Is There a Replication Crisis in Finance?&lt;/a&gt;, attempted to replicate 153 factors that supposedly predict stock performance. More importantly for our purposes, the authors publicly released their &lt;a href=&quot;https://github.com/bkelly-lab/GlobalFactor&quot;&gt;source code&lt;/a&gt; and &lt;a href=&quot;https://www.dropbox.com/sh/ep40dynf2w3lck4/AACYXMDxbGOv21lVK7OC4fq7a&quot;&gt;data&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://mdickens.me/2021/02/08/concentrated_stock_selection/&quot;&gt;Previously&lt;/a&gt;, using the Ken French Data Library, I found that equal-weighted (EW) factor portfolios performed better than value-weighted (VW) portfolios. This performance could not be explained by factor exposure—equal-weighted portfolios had “alpha” on top of the value-weighted factor. (I use scare quotes because it’s not true alpha. It’s still a form of factor exposure, but it’s factor exposure that a value-weighted factor construction fails to capture.) Can we replicate this finding using the Jensen, Kelly &amp;amp; Pedersen dataset?&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note: This essay assumes the reader has some familiarity with &lt;a href=&quot;https://www.investopedia.com/terms/c/capm.asp&quot;&gt;CAPM&lt;/a&gt; and factor investing. My &lt;a href=&quot;https://mdickens.me/2021/02/08/concentrated_stock_selection/&quot;&gt;previous essay&lt;/a&gt; provides a gentler introduction to the relevant concepts.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I filtered the dataset down to just those factors that had data for US stocks starting in 1960. The full dataset goes back to 1926, but many factors are impossible to calculate that far back because the necessary data doesn’t exist. I started in 1960 because we have enough data by that point to calculate the returns of almost all the factors (137 out of 153).&lt;/p&gt;

&lt;p&gt;The hypothesis is that EW factors provide a stronger signal. If a particular VW factor has a positive return, then its EW equivalent should have a higher return. And if a VW factor has a &lt;em&gt;negative&lt;/em&gt; return, then the EW version should have a &lt;em&gt;lower&lt;/em&gt; return.&lt;/p&gt;

&lt;p&gt;I tested this hypothesis in three ways:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;For each of the 137 factors, does EW have a larger absolute value of return than VW, and with the correct sign?&lt;/li&gt;
  &lt;li&gt;For each of the 137 factors, does EW have a larger absolute value of &lt;em&gt;risk-adjusted&lt;/em&gt; return?&lt;/li&gt;
  &lt;li&gt;Does EW provide a stronger signal if we restrict to just those factors where VW has a risk-adjusted return of at least 0.3?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;My reasoning for the third test is that factors with small risk-adjusted returns might not have any predictive power—they’re more likely to be noise. If we restrict our dataset to just the value-weighted factors with good performance (taking 0.3 as an arbitrary threshold for “good”&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;), then we should expect their equal-weighted equivalents to have even better performance.&lt;/p&gt;

&lt;p&gt;I calculated the returns (or risk-adjusted returns) for every factor and took the difference between EW and VW. Then I calculated the following summary statistics:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;number of eligible factors&lt;/li&gt;
  &lt;li&gt;number of factors for which the EW formulation provided a stronger signal than the VW version&lt;/li&gt;
  &lt;li&gt;mean difference between EW and VW factors&lt;/li&gt;
  &lt;li&gt;standard error of differences&lt;/li&gt;
  &lt;li&gt;t-statistic for the null hypothesis that the mean equals zero&lt;/li&gt;
&lt;/ul&gt;
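&lt;p&gt;A minimal sketch of this computation (the function name is hypothetical, and it assumes each factor’s returns have already been sign-adjusted so that the VW version’s expected return is positive):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import statistics

def paired_summary(ew, vw):
    # Per-factor difference between the equal-weighted and
    # value-weighted return (or risk-adjusted return), then a
    # one-sample t-statistic against a mean of zero.
    diffs = [e - v for e, v in zip(ew, vw)]
    mean = statistics.fmean(diffs)
    stderr = statistics.stdev(diffs) / len(diffs) ** 0.5
    return mean, stderr, mean / stderr
&lt;/code&gt;&lt;/pre&gt;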

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt;Absolute Return&lt;/th&gt;
      &lt;th&gt;Risk-Adjusted Return&lt;/th&gt;
      &lt;th&gt;Risk-Adjusted (&amp;gt; 0.3)&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;n&lt;/td&gt;
      &lt;td&gt;137&lt;/td&gt;
      &lt;td&gt;137&lt;/td&gt;
      &lt;td&gt;56&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;n(EW&amp;gt;VW)&lt;/td&gt;
      &lt;td&gt;110&lt;/td&gt;
      &lt;td&gt;112&lt;/td&gt;
      &lt;td&gt;54&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;mean&lt;/td&gt;
      &lt;td&gt;1.67%&lt;/td&gt;
      &lt;td&gt;0.245&lt;/td&gt;
      &lt;td&gt;0.493&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;stderr&lt;/td&gt;
      &lt;td&gt;0.21%&lt;/td&gt;
      &lt;td&gt;0.032&lt;/td&gt;
      &lt;td&gt;0.041&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;t-stat&lt;/td&gt;
      &lt;td&gt;7.6&lt;/td&gt;
      &lt;td&gt;8.0&lt;/td&gt;
      &lt;td&gt;12.2&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;In all three cases, EW provided a stronger signal for a large majority of factors, and all three differences had extremely high t-statistics. The t-statistic was highest when we restricted the sample to only factors with a risk-adjusted return of 0.3 or higher.&lt;/p&gt;

&lt;p&gt;This supports my previous finding that, before fees and transaction costs, EW factor portfolios outperform VW portfolios.&lt;/p&gt;

&lt;p&gt;Two important caveats:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;The Jensen, Kelly &amp;amp; Pedersen dataset only includes long/short factors. It’s possible that most of the outperformance of equal-weighted factors happens on the short side. This was not the case in the Ken French data, so it stands to reason that it’s not the case for most of these new factors, either. But we can’t say for sure without more data.&lt;/li&gt;
  &lt;li&gt;This analysis does not account for fees and transaction costs. Equal-weighted portfolios incur higher costs, and they’re prohibitively expensive for sufficiently large investors (with perhaps $1 billion or more). Small investors can invest in equal-weighted portfolios without paying much in transaction costs. I discussed this in more detail &lt;a href=&quot;https://mdickens.me/2021/02/08/concentrated_stock_selection/#fees-transaction-costs-and-taxes&quot;&gt;previously&lt;/a&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;em&gt;Disclaimer: This should not be taken as investment advice. All portfolio results given are hypothetical and do not represent returns achieved by an actual investor.&lt;/em&gt;&lt;/p&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I chose 0.3 as the threshold because it’s a bit less than the risk-adjusted return of the US stock market minus T-bills. My reasoning is that if a factor performed close to as well as the stock market itself, then it’s a pretty good factor (at least ex post). &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>The True Cost of Leveraged ETFs</title>
				<pubDate>Thu, 04 Mar 2021 00:00:00 -0800</pubDate>
				<link>http://mdickens.me/2021/03/04/true_cost_of_leveraged_etfs/</link>
				<guid isPermaLink="true">http://mdickens.me/2021/03/04/true_cost_of_leveraged_etfs/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;&lt;em&gt;Updated 2025-01-18 to fix some flaws in the calculations and add new data for 2021–2024. Tables have been updated to include the new results. For more, see &lt;a href=&quot;#2025-update-how-have-things-changed&quot;&gt;2025 Update: How have things changed?&lt;/a&gt;&lt;/em&gt; and &lt;em&gt;&lt;a href=&quot;#appendix-a-changes-to-calculation-methodology&quot;&gt;Appendix A: Changes to calculation methodology&lt;/a&gt;&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Under some circumstances, &lt;a href=&quot;https://reducing-suffering.org/should-altruists-leverage-investments/&quot;&gt;altruists might prefer to leverage their investments&lt;/a&gt;. The easiest way to get leverage is to buy leveraged ETFs. But leveraged ETFs charge high fees and incur other hidden costs. These costs vary substantially across different funds and across time, but on average, leveraged ETFs have historically had annual excess costs of about 1.5% per 100% leverage, or around 1% on top of the expense ratio.&lt;/p&gt;

&lt;p&gt;Given reasonable expectations for future returns, leveraged ETFs most likely have substantially higher arithmetic mean returns than their un-leveraged benchmarks. They also appear to have higher &lt;a href=&quot;https://en.wikipedia.org/wiki/Geometric_mean&quot;&gt;geometric mean&lt;/a&gt; returns than their benchmarks, but only by a small margin. More pessimistic estimates would find that adding leverage decreases geometric return.&lt;/p&gt;

&lt;p&gt;Note: Many investors can get leverage more cheaply via &lt;a href=&quot;https://reducing-suffering.org/should-altruists-leverage-investments/#Possible_leverage_methods&quot;&gt;other methods&lt;/a&gt;, such as margin loans or futures. Even if leveraged ETFs appear better than un-leveraged investments, other forms of leverage might be better still.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Disclaimer: This should not be taken as investment advice.&lt;/em&gt;&lt;/p&gt;

&lt;!-- more --&gt;

&lt;h2 id=&quot;contents&quot;&gt;Contents&lt;/h2&gt;

&lt;ul id=&quot;markdown-toc&quot;&gt;
  &lt;li&gt;&lt;a href=&quot;#contents&quot; id=&quot;markdown-toc-contents&quot;&gt;Contents&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#measuring-the-cost-of-leveraged-etfs&quot; id=&quot;markdown-toc-measuring-the-cost-of-leveraged-etfs&quot;&gt;Measuring the cost of leveraged ETFs&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#are-leveraged-etfs-worth-it&quot; id=&quot;markdown-toc-are-leveraged-etfs-worth-it&quot;&gt;Are leveraged ETFs worth it?&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#2025-update-how-have-things-changed&quot; id=&quot;markdown-toc-2025-update-how-have-things-changed&quot;&gt;2025 Update: How have things changed?&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#source-code&quot; id=&quot;markdown-toc-source-code&quot;&gt;Source code&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#appendix-a-changes-to-calculation-methodology&quot; id=&quot;markdown-toc-appendix-a-changes-to-calculation-methodology&quot;&gt;Appendix A: Changes to calculation methodology&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#appendix-b-original-tables-for-posterity&quot; id=&quot;markdown-toc-appendix-b-original-tables-for-posterity&quot;&gt;Appendix B: Original tables, for posterity&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#notes&quot; id=&quot;markdown-toc-notes&quot;&gt;Notes&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;measuring-the-cost-of-leveraged-etfs&quot;&gt;Measuring the cost of leveraged ETFs&lt;/h2&gt;

&lt;p&gt;To calculate costs, I created a list of leveraged ETFs and looked up their historical returns from the start of 2016 to the end of 2020. I paired each leveraged ETF with an equivalent un-leveraged benchmark ETF. I calculated what return you would have earned if you had taken that ETF and applied the appropriate amount of leverage (either 2x or 3x, depending on how much the corresponding leveraged ETF uses), paying only the risk-free rate.&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt; Then I subtracted the actual return of the leveraged ETF to find the excess cost.&lt;/p&gt;
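&lt;p&gt;The core of this calculation can be sketched in Python (hypothetical function names; the tables below additionally report the cost per 100% leverage, i.e., divided by the leverage minus one):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import math

def excess_cost(etf_daily, index_daily, leverage, rf_daily):
    # Simulate applying the leverage ratio to the benchmark ETF's
    # daily returns, borrowing the extra exposure at the risk-free
    # rate, then subtract the leveraged ETF's actual total return.
    sim = [leverage * r - (leverage - 1) * rf
           for r, rf in zip(index_daily, rf_daily)]

    def total(returns):
        return math.prod(1 + r for r in returns) - 1

    return total(sim) - total(etf_daily)
&lt;/code&gt;&lt;/pre&gt;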

&lt;p&gt;My analysis included these ETFs:&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;ETF&lt;/th&gt;
      &lt;th&gt;Leverage&lt;/th&gt;
      &lt;th&gt;Index&lt;/th&gt;
      &lt;th&gt;Index ETF&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;SPXL&lt;/td&gt;
      &lt;td&gt;3&lt;/td&gt;
      &lt;td&gt;S&amp;amp;P 500&lt;/td&gt;
      &lt;td&gt;SPY&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;UPRO&lt;/td&gt;
      &lt;td&gt;3&lt;/td&gt;
      &lt;td&gt;S&amp;amp;P 500&lt;/td&gt;
      &lt;td&gt;SPY&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;SSO&lt;/td&gt;
      &lt;td&gt;2&lt;/td&gt;
      &lt;td&gt;S&amp;amp;P 500&lt;/td&gt;
      &lt;td&gt;SPY&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;UMDD&lt;/td&gt;
      &lt;td&gt;3&lt;/td&gt;
      &lt;td&gt;S&amp;amp;P 400 (mid cap)&lt;/td&gt;
      &lt;td&gt;IJH&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;URTY&lt;/td&gt;
      &lt;td&gt;3&lt;/td&gt;
      &lt;td&gt;Russell 2000 (small cap)&lt;/td&gt;
      &lt;td&gt;IWM&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;EFO&lt;/td&gt;
      &lt;td&gt;2&lt;/td&gt;
      &lt;td&gt;EAFE&lt;/td&gt;
      &lt;td&gt;EFA&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;EURL&lt;/td&gt;
      &lt;td&gt;3&lt;/td&gt;
      &lt;td&gt;Europe&lt;/td&gt;
      &lt;td&gt;VGK&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;EZJ&lt;/td&gt;
      &lt;td&gt;2&lt;/td&gt;
      &lt;td&gt;Japan&lt;/td&gt;
      &lt;td&gt;EWJ&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;EET&lt;/td&gt;
      &lt;td&gt;2&lt;/td&gt;
      &lt;td&gt;emerging markets&lt;/td&gt;
      &lt;td&gt;EEM&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;EDC&lt;/td&gt;
      &lt;td&gt;3&lt;/td&gt;
      &lt;td&gt;emerging markets&lt;/td&gt;
      &lt;td&gt;EEM&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;TQQQ&lt;/td&gt;
      &lt;td&gt;3&lt;/td&gt;
      &lt;td&gt;NASDAQ&lt;/td&gt;
      &lt;td&gt;QQQ&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;TMF&lt;/td&gt;
      &lt;td&gt;3&lt;/td&gt;
      &lt;td&gt;long-term US Treasury bonds&lt;/td&gt;
      &lt;td&gt;TLT&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;The following table shows the total excess cost and after-fee cost for various leveraged ETFs (2016–2024). Excess cost is shown per 100% leverage (so the excess on a 3x ETF is divided by two). After-fee cost is the excess cost minus the expense ratio.&lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt; &lt;code&gt;r&lt;/code&gt; gives the correlation between the leveraged ETF and the simulated leveraged benchmark ETF.&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;ETF&lt;/th&gt;
      &lt;th&gt;Excess Cost&lt;/th&gt;
      &lt;th&gt;After Fee&lt;/th&gt;
      &lt;th&gt;r&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;SPXL&lt;/td&gt;
      &lt;td&gt;1.14%&lt;/td&gt;
      &lt;td&gt;0.82%&lt;/td&gt;
      &lt;td&gt;0.999&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;UPRO&lt;/td&gt;
      &lt;td&gt;1.17%&lt;/td&gt;
      &lt;td&gt;0.85%&lt;/td&gt;
      &lt;td&gt;0.999&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;SSO&lt;/td&gt;
      &lt;td&gt;1.51%&lt;/td&gt;
      &lt;td&gt;0.80%&lt;/td&gt;
      &lt;td&gt;0.999&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;UMDD&lt;/td&gt;
      &lt;td&gt;0.94%&lt;/td&gt;
      &lt;td&gt;0.54%&lt;/td&gt;
      &lt;td&gt;0.991&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;URTY&lt;/td&gt;
      &lt;td&gt;1.22%&lt;/td&gt;
      &lt;td&gt;1.03%&lt;/td&gt;
      &lt;td&gt;0.999&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;EFO&lt;/td&gt;
      &lt;td&gt;2.31%&lt;/td&gt;
      &lt;td&gt;1.48%&lt;/td&gt;
      &lt;td&gt;0.947&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;EURL&lt;/td&gt;
      &lt;td&gt;1.37%&lt;/td&gt;
      &lt;td&gt;0.98%&lt;/td&gt;
      &lt;td&gt;0.995&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;EZJ&lt;/td&gt;
      &lt;td&gt;2.32%&lt;/td&gt;
      &lt;td&gt;2.20%&lt;/td&gt;
      &lt;td&gt;0.936&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;EET&lt;/td&gt;
      &lt;td&gt;1.83%&lt;/td&gt;
      &lt;td&gt;1.04%&lt;/td&gt;
      &lt;td&gt;0.987&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;EDC&lt;/td&gt;
      &lt;td&gt;1.17%&lt;/td&gt;
      &lt;td&gt;0.72%&lt;/td&gt;
      &lt;td&gt;0.998&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;TQQQ&lt;/td&gt;
      &lt;td&gt;1.09%&lt;/td&gt;
      &lt;td&gt;0.97%&lt;/td&gt;
      &lt;td&gt;0.999&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;TMF&lt;/td&gt;
      &lt;td&gt;0.77%&lt;/td&gt;
      &lt;td&gt;0.48%&lt;/td&gt;
      &lt;td&gt;0.997&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;strong&gt;Average&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;1.40%&lt;/td&gt;
      &lt;td&gt;0.99%&lt;/td&gt;
      &lt;td&gt; &lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;On average, the leveraged ETFs had an excess cost of 1.40% per 100% leverage, or 0.99% in excess of the ETF expense ratios. Almost all fund pairings had high correlations, which tells us that the leveraged ETFs do a good job of tracking their benchmarks (with the possible exceptions of EFO and EZJ).&lt;/p&gt;

&lt;p&gt;Excess costs were not stable—they varied substantially from year to year. The next table shows excess cost for each individual year:&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;ETF&lt;/th&gt;
      &lt;th&gt;2016&lt;/th&gt;
      &lt;th&gt;2017&lt;/th&gt;
      &lt;th&gt;2018&lt;/th&gt;
      &lt;th&gt;2019&lt;/th&gt;
      &lt;th&gt;2020&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;SPXL&lt;/td&gt;
      &lt;td&gt;1.39&lt;/td&gt;
      &lt;td&gt;1.63&lt;/td&gt;
      &lt;td&gt;0.85&lt;/td&gt;
      &lt;td&gt;1.77&lt;/td&gt;
      &lt;td&gt;2.74&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;UPRO&lt;/td&gt;
      &lt;td&gt;0.99&lt;/td&gt;
      &lt;td&gt;1.52&lt;/td&gt;
      &lt;td&gt;0.86&lt;/td&gt;
      &lt;td&gt;2.22&lt;/td&gt;
      &lt;td&gt;2.48&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;SSO&lt;/td&gt;
      &lt;td&gt;1.44&lt;/td&gt;
      &lt;td&gt;1.74&lt;/td&gt;
      &lt;td&gt;1.39&lt;/td&gt;
      &lt;td&gt;2.57&lt;/td&gt;
      &lt;td&gt;2.63&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;UMDD&lt;/td&gt;
      &lt;td&gt;1.40&lt;/td&gt;
      &lt;td&gt;1.94&lt;/td&gt;
      &lt;td&gt;0.56&lt;/td&gt;
      &lt;td&gt;1.43&lt;/td&gt;
      &lt;td&gt;0.13&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;URTY&lt;/td&gt;
      &lt;td&gt;1.14&lt;/td&gt;
      &lt;td&gt;1.00&lt;/td&gt;
      &lt;td&gt;0.46&lt;/td&gt;
      &lt;td&gt;1.50&lt;/td&gt;
      &lt;td&gt;3.71&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;EFO&lt;/td&gt;
      &lt;td&gt;0.84&lt;/td&gt;
      &lt;td&gt;2.91&lt;/td&gt;
      &lt;td&gt;2.05&lt;/td&gt;
      &lt;td&gt;4.24&lt;/td&gt;
      &lt;td&gt;1.52&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;EURL&lt;/td&gt;
      &lt;td&gt;1.43&lt;/td&gt;
      &lt;td&gt;2.61&lt;/td&gt;
      &lt;td&gt;0.72&lt;/td&gt;
      &lt;td&gt;3.33&lt;/td&gt;
      &lt;td&gt;1.44&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;EZJ&lt;/td&gt;
      &lt;td&gt;2.88&lt;/td&gt;
      &lt;td&gt;1.63&lt;/td&gt;
      &lt;td&gt;1.84&lt;/td&gt;
      &lt;td&gt;3.49&lt;/td&gt;
      &lt;td&gt;1.78&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;EET&lt;/td&gt;
      &lt;td&gt;2.23&lt;/td&gt;
      &lt;td&gt;1.96&lt;/td&gt;
      &lt;td&gt;1.37&lt;/td&gt;
      &lt;td&gt;1.85&lt;/td&gt;
      &lt;td&gt;1.35&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;EDC&lt;/td&gt;
      &lt;td&gt;1.41&lt;/td&gt;
      &lt;td&gt;1.95&lt;/td&gt;
      &lt;td&gt;0.49&lt;/td&gt;
      &lt;td&gt;1.91&lt;/td&gt;
      &lt;td&gt;1.12&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;TQQQ&lt;/td&gt;
      &lt;td&gt;1.05&lt;/td&gt;
      &lt;td&gt;1.75&lt;/td&gt;
      &lt;td&gt;0.64&lt;/td&gt;
      &lt;td&gt;2.30&lt;/td&gt;
      &lt;td&gt;3.51&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;TMF&lt;/td&gt;
      &lt;td&gt;0.71&lt;/td&gt;
      &lt;td&gt;0.56&lt;/td&gt;
      &lt;td&gt;0.12&lt;/td&gt;
      &lt;td&gt;0.94&lt;/td&gt;
      &lt;td&gt;1.71&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;strong&gt;Average&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;1.41&lt;/td&gt;
      &lt;td&gt;1.77&lt;/td&gt;
      &lt;td&gt;0.95&lt;/td&gt;
      &lt;td&gt;2.30&lt;/td&gt;
      &lt;td&gt;2.01&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;New data through 2024:&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;ETF&lt;/th&gt;
      &lt;th&gt;2021&lt;/th&gt;
      &lt;th&gt;2022&lt;/th&gt;
      &lt;th&gt;2023&lt;/th&gt;
      &lt;th&gt;2024&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;SPXL&lt;/td&gt;
      &lt;td&gt;1.94&lt;/td&gt;
      &lt;td&gt;0.29&lt;/td&gt;
      &lt;td&gt;1.61&lt;/td&gt;
      &lt;td&gt;2.13&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;UPRO&lt;/td&gt;
      &lt;td&gt;2.06&lt;/td&gt;
      &lt;td&gt;0.44&lt;/td&gt;
      &lt;td&gt;1.97&lt;/td&gt;
      &lt;td&gt;2.14&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;SSO&lt;/td&gt;
      &lt;td&gt;2.37&lt;/td&gt;
      &lt;td&gt;0.86&lt;/td&gt;
      &lt;td&gt;2.09&lt;/td&gt;
      &lt;td&gt;2.46&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;UMDD&lt;/td&gt;
      &lt;td&gt;1.66&lt;/td&gt;
      &lt;td&gt;0.37&lt;/td&gt;
      &lt;td&gt;1.68&lt;/td&gt;
      &lt;td&gt;1.94&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;URTY&lt;/td&gt;
      &lt;td&gt;0.70&lt;/td&gt;
      &lt;td&gt;0.40&lt;/td&gt;
      &lt;td&gt;1.71&lt;/td&gt;
      &lt;td&gt;1.24&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;EFO&lt;/td&gt;
      &lt;td&gt;2.71&lt;/td&gt;
      &lt;td&gt;2.28&lt;/td&gt;
      &lt;td&gt;3.83&lt;/td&gt;
      &lt;td&gt;2.28&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;EURL&lt;/td&gt;
      &lt;td&gt;2.01&lt;/td&gt;
      &lt;td&gt;0.32&lt;/td&gt;
      &lt;td&gt;1.61&lt;/td&gt;
      &lt;td&gt;1.20&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;EZJ&lt;/td&gt;
      &lt;td&gt;2.49&lt;/td&gt;
      &lt;td&gt;2.08&lt;/td&gt;
      &lt;td&gt;2.87&lt;/td&gt;
      &lt;td&gt;2.75&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;EET&lt;/td&gt;
      &lt;td&gt;1.08&lt;/td&gt;
      &lt;td&gt;1.63&lt;/td&gt;
      &lt;td&gt;3.31&lt;/td&gt;
      &lt;td&gt;2.10&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;EDC&lt;/td&gt;
      &lt;td&gt;1.15&lt;/td&gt;
      &lt;td&gt;0.44&lt;/td&gt;
      &lt;td&gt;1.32&lt;/td&gt;
      &lt;td&gt;1.63&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;TQQQ&lt;/td&gt;
      &lt;td&gt;0.66&lt;/td&gt;
      &lt;td&gt;0.16&lt;/td&gt;
      &lt;td&gt;0.70&lt;/td&gt;
      &lt;td&gt;0.99&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;strong&gt;Average&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;1.74&lt;/td&gt;
      &lt;td&gt;0.80&lt;/td&gt;
      &lt;td&gt;2.12&lt;/td&gt;
      &lt;td&gt;1.92&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;For individual years, excess costs ranged from as low as 0.13% to as high as 4.24%.&lt;/p&gt;

&lt;p&gt;Some factors that might contribute to this year-to-year variance:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;The ETFs’ counterparties charge different rates based on perceived risk.&lt;/li&gt;
  &lt;li&gt;Market liquidity varies over time.&lt;/li&gt;
  &lt;li&gt;The ETFs might experience some tracking error.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Excess cost does not include the risk-free rate, so the year-to-year variance has nothing to do with changes in the risk-free rate.&lt;/p&gt;

&lt;p&gt;Some people observe that 3x leveraged funds usually provide less than a 3x return due to volatility drag. To be clear, the excess cost I found is &lt;strong&gt;not&lt;/strong&gt; related to volatility drag. Volatility drag equals the difference between the &lt;a href=&quot;https://www.investopedia.com/ask/answers/06/geometricmean.asp&quot;&gt;arithmetic mean and geometric mean&lt;/a&gt; of an investment. If two investments A and B have the same arithmetic return but B has higher volatility, then B will have &lt;em&gt;lower&lt;/em&gt; geometric return.&lt;/p&gt;

&lt;p&gt;Specifically, the geometric mean of an investment equals \(\alpha - \sigma^2/2\), where \(\alpha\) is the arithmetic mean and \(\sigma\) is the standard deviation.&lt;/p&gt;

&lt;p&gt;The geometric mean of a 3x leveraged ETF is much less than 3x the geometric mean of its benchmark. For example, if the benchmark has a geometric mean of 5% and a standard deviation of 16%, then we calculate the geometric mean of the leveraged ETF as:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Arithmetic mean of the benchmark is \(5\% + 16\%^2/2 = 6.28\%\)&lt;/li&gt;
  &lt;li&gt;Arithmetic mean of the leveraged ETF is \(3 \times 6.28\% = 18.84\%\)&lt;/li&gt;
  &lt;li&gt;Standard deviation of the leveraged ETF is \(3 \times 16\% = 48\%\)&lt;/li&gt;
  &lt;li&gt;Geometric mean of the leveraged ETF is \(18.84\% - 48\%^2/2 = 7.32\%\)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In this example, adding 3x leverage only increases the expected geometric return from 5% to 7.32%. If we assume a 3x leveraged ETF has a 2.5% excess cost, that reduces its expected geometric return to 4.82%, which is &lt;em&gt;lower&lt;/em&gt; than the return of an un-leveraged ETF.&lt;/p&gt;
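&lt;p&gt;The arithmetic above can be wrapped up in a short Python function (a sketch; the function name is mine):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;def leveraged_geometric_mean(geo, stdev, leverage, excess_cost=0.0):
    # Geometric mean approximation: g = a - stdev**2 / 2, where a
    # is the arithmetic mean. Leverage scales a and stdev linearly.
    arith = geo + stdev ** 2 / 2
    lev_arith = leverage * arith - excess_cost
    lev_stdev = leverage * stdev
    return lev_arith - lev_stdev ** 2 / 2
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;code&gt;leveraged_geometric_mean(0.05, 0.16, 3)&lt;/code&gt; reproduces the 7.32% figure, and passing &lt;code&gt;excess_cost=0.025&lt;/code&gt; reproduces the 4.82% figure.&lt;/p&gt;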

&lt;h2 id=&quot;are-leveraged-etfs-worth-it&quot;&gt;Are leveraged ETFs worth it?&lt;/h2&gt;

&lt;p&gt;Mathematically speaking, an agent with a logarithmic utility function wants to maximize geometric mean, while a risk-neutral agent wants to maximize arithmetic mean. A logarithmic utility function is probably still too aggressive for self-interested investors, but it might be appropriate for altruists. Under some circumstances, altruists might even be close to risk-neutral.&lt;/p&gt;

&lt;p&gt;For risk-neutral investors, leveraged ETFs pretty clearly have higher expected utility than normal ETFs. (Although they might prefer to get leverage some other way if they can, since leveraged ETFs are one of the most expensive forms of leverage.)&lt;/p&gt;

&lt;p&gt;Are leveraged ETFs worth it for investors with logarithmic utility? That is, can we expect leveraged ETFs to have a higher geometric return than ordinary ETFs?&lt;/p&gt;

&lt;p&gt;That depends on three things: the expected return of the benchmark, volatility, and excess cost.&lt;/p&gt;

&lt;p&gt;Let’s assume an excess cost of 2% and a risk-free rate of 0%. Long-run market volatility generally varies from 15% (for more stable markets like the S&amp;amp;P 500) to 22% (for more volatile markets like US small-caps or emerging markets). Then we can ask, what expected geometric return does an index need to have for a leveraged ETF to be worth it? Let’s assume we can use the optimal amount of leverage, not just 2x or 3x. (We can combine a 3x ETF with an un-leveraged ETF to get an intermediate amount of leverage.)&lt;/p&gt;

&lt;p&gt;At a 15% standard deviation, an investment needs an expected (geometric) return higher than 3.125% for leverage to be worth it. At a standard deviation of 22%, the investment must return more than 4.42%.&lt;sup id=&quot;fnref:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;If we assume a total cost of 2.5% (e.g., an excess cost of 2% plus a risk-free rate of 0.5%, or an excess cost of 2.5% when the risk-free rate is zero), then an investment with 15% standard deviation must return 3.625%, and one with 22% standard deviation must return 4.92%.&lt;/p&gt;
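&lt;p&gt;These thresholds follow directly from the break-even condition in footnote 3: leverage raises geometric return only when the index’s expected geometric return exceeds \(\sigma^2 / 2 + R + c\). A minimal sketch, assuming that formula:&lt;/p&gt;

```python
def breakeven_return(sigma, excess_cost, risk_free=0.0):
    """Minimum expected geometric return at which leverage starts to help."""
    return sigma**2 / 2 + risk_free + excess_cost

print(breakeven_return(0.15, 0.02))    # 0.03125 -> 3.125%
print(breakeven_return(0.22, 0.02))    # ~0.0442 -> 4.42%
print(breakeven_return(0.15, 0.025))   # 0.03625 -> 3.625%
print(breakeven_return(0.22, 0.025))   # ~0.0492 -> 4.92%
```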

&lt;p&gt;What returns can we expect from various market indexes? We can’t know with high confidence, but we can try to estimate it. The following table gives return estimates for various equity regions according to &lt;a href=&quot;https://interactive.researchaffiliates.com/asset-allocation/&quot;&gt;Research Affiliates&lt;/a&gt; (RA) and &lt;a href=&quot;https://www.aqr.com/Insights/Research/Alternative-Thinking/2021-Capital-Markets-Assumptions-for-Major-Asset-Classes&quot;&gt;AQR&lt;/a&gt;, and standard deviation estimates from Research Affiliates.&lt;sup id=&quot;fnref:4&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:4&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Index&lt;/th&gt;
      &lt;th&gt;Return (RA)&lt;/th&gt;
      &lt;th&gt;Return (AQR)&lt;/th&gt;
      &lt;th&gt;Stdev (RA)&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;US large&lt;/td&gt;
      &lt;td&gt;2.0%&lt;/td&gt;
      &lt;td&gt;5.9%&lt;/td&gt;
      &lt;td&gt;15.4%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;US small&lt;/td&gt;
      &lt;td&gt;4.3%&lt;/td&gt;
      &lt;td&gt;N/A&lt;/td&gt;
      &lt;td&gt;20.7%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;EAFE&lt;/td&gt;
      &lt;td&gt;6.3%&lt;/td&gt;
      &lt;td&gt;6.5%&lt;/td&gt;
      &lt;td&gt;17.3%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Europe&lt;/td&gt;
      &lt;td&gt;6.4%&lt;/td&gt;
      &lt;td&gt;6.3%&lt;/td&gt;
      &lt;td&gt;19.2%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Japan&lt;/td&gt;
      &lt;td&gt;5.8%&lt;/td&gt;
      &lt;td&gt;6.3%&lt;/td&gt;
      &lt;td&gt;17.6%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;emerging&lt;/td&gt;
      &lt;td&gt;7.6%&lt;/td&gt;
      &lt;td&gt;7.0%&lt;/td&gt;
      &lt;td&gt;21.7%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;The next table gives the expected (geometric) return of an optimally-leveraged portfolio&lt;sup id=&quot;fnref:5&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:5&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;5&lt;/a&gt;&lt;/sup&gt; minus the expected return of the index, constructed using a leveraged ETF with a 1.5% cost.&lt;sup id=&quot;fnref:6&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:6&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;6&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Index&lt;/th&gt;
      &lt;th&gt;Improvement (RA)&lt;/th&gt;
      &lt;th&gt;Improvement (AQR)&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;US large&lt;/td&gt;
      &lt;td&gt;0.1%&lt;/td&gt;
      &lt;td&gt;2.2%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;US small&lt;/td&gt;
      &lt;td&gt;0.1%&lt;/td&gt;
      &lt;td&gt;1.5%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;EAFE&lt;/td&gt;
      &lt;td&gt;1.8%&lt;/td&gt;
      &lt;td&gt;2.1%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Europe&lt;/td&gt;
      &lt;td&gt;1.3%&lt;/td&gt;
      &lt;td&gt;1.2%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Japan&lt;/td&gt;
      &lt;td&gt;1.2%&lt;/td&gt;
      &lt;td&gt;1.7%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;emerging&lt;/td&gt;
      &lt;td&gt;1.5%&lt;/td&gt;
      &lt;td&gt;1.1%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Applying leverage at least marginally increases expected geometric return in every sample case. However, leverage provides only a modest improvement, with the best case being 2.2 percentage points, and the projected improvements in some cases being as low as 0.1 percentage points.&lt;/p&gt;
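&lt;p&gt;The table can be reproduced with the formula from footnote 6. For example, plugging in the Research Affiliates inputs for US large (2.0% return, 15.4% stdev) and EAFE (6.3%, 17.3%) with a 1.5% leveraged-ETF cost recovers the 0.1% and 1.8% improvements:&lt;/p&gt;

```python
def return_improvement(mu, sigma, cost):
    # Same formula as in footnote 6: find the geometric-mean-optimal leverage,
    # then take the leveraged geometric return minus the index's
    leverage = (mu + sigma**2 / 2 - cost) / sigma**2
    return (
        leverage * (mu + sigma**2 / 2) - (leverage * sigma)**2 / 2
        - (leverage - 1) * cost
        - mu
    )

print(round(return_improvement(0.020, 0.154, 0.015) * 100, 1))  # US large (RA): 0.1
print(round(return_improvement(0.063, 0.173, 0.015) * 100, 1))  # EAFE (RA): 1.8
```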

&lt;p&gt;&lt;a href=&quot;https://advisors.vanguard.com/insights/article/marketperspectivesdecember2021&quot;&gt;Vanguard’s return expectations&lt;/a&gt; produce similar numbers (not shown).&lt;/p&gt;

&lt;p&gt;Of course, these results heavily depend on what inputs we use for expected return, volatility, and costs, and the true numbers might differ significantly. If costs turn out to be closer to 3%, or if true (ex-ante) expected returns are one percentage point lower, that would eliminate the benefit of using leverage.&lt;/p&gt;

&lt;p&gt;Or suppose you are trying to decide between two ways of investing:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Buy leveraged ETFs in a donor-advised fund (DAF).&lt;/li&gt;
  &lt;li&gt;Buy un-leveraged ETFs in a taxable account, and use margin to get leverage.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Say your DAF has to pay a 0.3% fund fee plus a 1% fee for an investment manager, and margin costs 0.5% over the risk-free rate. That means the DAF has to pay an extra 2.3% cost above what the taxable account pays (DAF fees plus leveraged ETF cost minus margin cost), whereas the taxable account must pay taxes. 2.3% is a lot. It’s not obvious whether it outweighs the tax benefits of a DAF, but I personally would lean toward keeping my money in a taxable account.&lt;/p&gt;
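&lt;p&gt;The 2.3% figure is just the sum of the extra costs the DAF pays, net of the margin interest the taxable account pays. A sketch, assuming the 1.5% leveraged-ETF cost used in the earlier table:&lt;/p&gt;

```python
daf_fund_fee = 0.003  # 0.3% DAF fund fee
manager_fee  = 0.010  # 1% investment-manager fee
letf_cost    = 0.015  # leveraged ETF excess cost (assumption from earlier)
margin_cost  = 0.005  # margin rate over the risk-free rate

# Extra cost the DAF pays relative to the leveraged taxable account
extra_daf_cost = daf_fund_fee + manager_fee + letf_cost - margin_cost
print(f"{extra_daf_cost:.1%}")  # prints: 2.3%
```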

&lt;h2 id=&quot;2025-update-how-have-things-changed&quot;&gt;2025 Update: How have things changed?&lt;/h2&gt;

&lt;p&gt;I ran the numbers again to cover the four years since this post was first published. In short:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;My previous methodology had some flaws, and overestimated the true average cost by about 0.5 percentage points. (See &lt;a href=&quot;#appendix-a-changes-to-calculation-methodology&quot;&gt;Appendix A&lt;/a&gt; for a full explanation.)&lt;/li&gt;
  &lt;li&gt;Since 2021, excess costs have continued to vary a lot from year to year.&lt;/li&gt;
  &lt;li&gt;Excess costs were just as high after 2020 as they were before 2020.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;2016–2020 had an average annual cost of 1.77% (taking the arithmetic mean across all ETF-year pairs) and 2021–2024 had an average of 1.74%. The difference was far from statistically significant (p = 0.87).&lt;/p&gt;

&lt;h2 id=&quot;source-code&quot;&gt;Source code&lt;/h2&gt;

&lt;p&gt;Source code for the 2025 revision of this article is available &lt;a href=&quot;https://github.com/michaeldickens/leveraged-etfs&quot;&gt;on GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;h1 id=&quot;appendix-a-changes-to-calculation-methodology&quot;&gt;Appendix A: Changes to calculation methodology&lt;/h1&gt;

&lt;p&gt;In the 2025 update to this post, I made several changes to my original methodology that changed the numbers.&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Based on the script I originally used, the average excess cost of ETFs (2016–2020) was 2.09%.&lt;/li&gt;
  &lt;li&gt;I discovered that I had been calculating the risk-free rate incorrectly. I converted annualized T-bill yields into daily interest by raising to the power of 1/365. That’s not correct because there are not 365 trading days per year—there are approximately 252. Raising to the power of 1/252 instead reduced the average excess cost to 1.64%.&lt;/li&gt;
  &lt;li&gt;I completely rewrote the calculations because I lost my original script (I later rediscovered it). The new script uses a more rigorous method for calculating the cost of leverage: instead of assuming the interest paid is uniform across 252 trading days, it pays interest on every calendar day, using the last known rate on days when the market is closed. The new script also correctly handles days when the stock market is open but the Treasury is not (which happens occasionally). The new script produces an excess cost of 1.61%.&lt;/li&gt;
&lt;/ol&gt;
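&lt;p&gt;To see why the 1/365 exponent matters: a daily rate of \((1+y)^{1/365} - 1\) compounded over only 252 trading days recovers less than the full annual yield, so the simulated fund pays too little interest and the measured excess cost comes out too high. A sketch with a hypothetical 2% T-bill yield:&lt;/p&gt;

```python
y = 0.02  # hypothetical annualized T-bill yield

wrong_daily = (1 + y) ** (1 / 365) - 1  # exponent based on calendar days
right_daily = (1 + y) ** (1 / 252) - 1  # exponent based on trading days

# Compound each over the ~252 trading days in a year
wrong_annual = (1 + wrong_daily) ** 252 - 1  # ~1.38%: understates financing cost
right_annual = (1 + right_daily) ** 252 - 1  # 2.00%: matches the annual yield
```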

&lt;p&gt;There is one additional issue that has no simple resolution: there are multiple ways to define “average excess cost”, and it’s not clear which one is best.&lt;/p&gt;

&lt;p&gt;We can calculate excess cost in at least three ways:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Take the total return of the simulated leveraged ETF; annualize it by raising to the power of one over the number of years in the sample; do the same for the actual leveraged ETF. Let the excess cost be the difference between those two numbers. (This is the method my original script used.)&lt;/li&gt;
  &lt;li&gt;For each individual year, calculate the difference in annual return between the simulated and actual ETFs. Let the excess cost be the simple average of each year’s costs. (That’s the method I used &lt;a href=&quot;#2025-update-how-have-things-changed&quot;&gt;above&lt;/a&gt; when I wrote that 2016–2020 had an average annual cost of 1.69%.)&lt;/li&gt;
  &lt;li&gt;Let the excess cost be the number &lt;code&gt;X&lt;/code&gt; such that, if you took the simulated ETF and deducted &lt;code&gt;X&lt;/code&gt; from the return every year, the resulting total return (across the full sample) would equal the total return of the actual ETF.&lt;/li&gt;
&lt;/ol&gt;
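&lt;p&gt;A sketch of the three definitions on made-up yearly returns, where the simulated ETF beats the actual ETF by exactly 2 percentage points every year. The third method’s \(X\) can be found numerically, e.g. by bisection, since total return falls as \(X\) rises over the relevant range:&lt;/p&gt;

```python
from math import prod

def method1(sim, act):
    """Difference in annualized total returns."""
    n = len(sim)
    ann = lambda rs: prod(1 + r for r in rs) ** (1 / n) - 1
    return ann(sim) - ann(act)

def method2(sim, act):
    """Simple average of each year's return difference."""
    return sum(s - a for s, a in zip(sim, act)) / len(sim)

def method3(sim, act):
    """X such that deducting X from the simulated return every year
    reproduces the actual ETF's total return."""
    target = prod(1 + r for r in act)
    lo, hi = -1.0, 1.0
    for _ in range(80):  # bisection on X
        mid = (lo + hi) / 2
        if prod(1 + r - mid for r in sim) > target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Made-up yearly returns with a true per-year cost of exactly 2%
sim = [0.10, 0.20, -0.05]
act = [0.08, 0.18, -0.07]
print(method2(sim, act), method3(sim, act))  # both ~0.02
```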

&lt;p&gt;The third method seems most reasonable to me, so it’s what I used in my new script. It’s also how you would calculate the impact of a management fee, if the ETF charged one.&lt;/p&gt;

&lt;p&gt;While the first method found an average cost of 1.65%, the third method found a cost of 1.35%.&lt;/p&gt;

&lt;p&gt;The table &lt;a href=&quot;#measuring-the-cost-of-leveraged-etfs&quot;&gt;above&lt;/a&gt; reported an average of 1.43%, not 1.35%, because (1) the 2025 version includes a longer date range and (2) it includes two new leveraged ETFs.&lt;/p&gt;

&lt;p&gt;For the 2025 edition, I added these leveraged ETFs and their corresponding index ETFs:&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Leveraged ETF&lt;/th&gt;
      &lt;th&gt;Leverage&lt;/th&gt;
      &lt;th&gt;Index&lt;/th&gt;
      &lt;th&gt;Index ETF&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;SSO&lt;/td&gt;
      &lt;td&gt;2&lt;/td&gt;
      &lt;td&gt;S&amp;amp;P 500&lt;/td&gt;
      &lt;td&gt;SPY&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;TQQQ&lt;/td&gt;
      &lt;td&gt;3&lt;/td&gt;
      &lt;td&gt;NASDAQ&lt;/td&gt;
      &lt;td&gt;QQQ&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;&lt;em&gt;Updated again 2025-01-22. I realized there was a data entry error: for 2016–2017, I incorrectly used the 4-month Treasury yield as the risk-free rate rather than the 3-month Treasury yield. This only slightly changed the results.&lt;/em&gt;&lt;/p&gt;

&lt;h1 id=&quot;appendix-b-original-tables-for-posterity&quot;&gt;Appendix B: Original tables, for posterity&lt;/h1&gt;

&lt;p&gt;These are the tables I originally included in the post in 2021. I made some mistakes in calculating these, so the figures are not correct.&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;ETF&lt;/th&gt;
      &lt;th&gt;Excess Cost&lt;/th&gt;
      &lt;th&gt;After Fee&lt;/th&gt;
      &lt;th&gt;r&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;SPXL&lt;/td&gt;
      &lt;td&gt;2.23%&lt;/td&gt;
      &lt;td&gt;1.72%&lt;/td&gt;
      &lt;td&gt;0.995&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;UPRO&lt;/td&gt;
      &lt;td&gt;2.12%&lt;/td&gt;
      &lt;td&gt;1.64%&lt;/td&gt;
      &lt;td&gt;0.995&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;UMDD&lt;/td&gt;
      &lt;td&gt;1.31%&lt;/td&gt;
      &lt;td&gt;0.84%&lt;/td&gt;
      &lt;td&gt;0.995&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;URTY&lt;/td&gt;
      &lt;td&gt;2.08%&lt;/td&gt;
      &lt;td&gt;1.61%&lt;/td&gt;
      &lt;td&gt;0.993&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;EFO&lt;/td&gt;
      &lt;td&gt;2.47%&lt;/td&gt;
      &lt;td&gt;1.14%&lt;/td&gt;
      &lt;td&gt;0.978&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;EURL&lt;/td&gt;
      &lt;td&gt;1.97%&lt;/td&gt;
      &lt;td&gt;1.44%&lt;/td&gt;
      &lt;td&gt;0.984&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;EZJ&lt;/td&gt;
      &lt;td&gt;2.79%&lt;/td&gt;
      &lt;td&gt;1.84%&lt;/td&gt;
      &lt;td&gt;0.981&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;EET&lt;/td&gt;
      &lt;td&gt;2.18%&lt;/td&gt;
      &lt;td&gt;1.25%&lt;/td&gt;
      &lt;td&gt;0.994&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;EDC&lt;/td&gt;
      &lt;td&gt;1.67%&lt;/td&gt;
      &lt;td&gt;1.20%&lt;/td&gt;
      &lt;td&gt;0.994&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;ETF&lt;/th&gt;
      &lt;th&gt;2016&lt;/th&gt;
      &lt;th&gt;2017&lt;/th&gt;
      &lt;th&gt;2018&lt;/th&gt;
      &lt;th&gt;2019&lt;/th&gt;
      &lt;th&gt;2020&lt;/th&gt;
      &lt;th&gt;Min&lt;/th&gt;
      &lt;th&gt;Max&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;SPXL&lt;/td&gt;
      &lt;td&gt;1.39&lt;/td&gt;
      &lt;td&gt;2.02&lt;/td&gt;
      &lt;td&gt;1.22&lt;/td&gt;
      &lt;td&gt;3.25&lt;/td&gt;
      &lt;td&gt;3.01&lt;/td&gt;
      &lt;td&gt;1.22&lt;/td&gt;
      &lt;td&gt;3.25&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;EET&lt;/td&gt;
      &lt;td&gt;2.73&lt;/td&gt;
      &lt;td&gt;2.63&lt;/td&gt;
      &lt;td&gt;1.55&lt;/td&gt;
      &lt;td&gt;2.71&lt;/td&gt;
      &lt;td&gt;1.58&lt;/td&gt;
      &lt;td&gt;1.55&lt;/td&gt;
      &lt;td&gt;2.73&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;UPRO&lt;/td&gt;
      &lt;td&gt;1.03&lt;/td&gt;
      &lt;td&gt;1.90&lt;/td&gt;
      &lt;td&gt;1.27&lt;/td&gt;
      &lt;td&gt;3.51&lt;/td&gt;
      &lt;td&gt;2.76&lt;/td&gt;
      &lt;td&gt;1.03&lt;/td&gt;
      &lt;td&gt;3.51&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;UMDD&lt;/td&gt;
      &lt;td&gt;1.58&lt;/td&gt;
      &lt;td&gt;0.85&lt;/td&gt;
      &lt;td&gt;1.04&lt;/td&gt;
      &lt;td&gt;2.03&lt;/td&gt;
      &lt;td&gt;0.06&lt;/td&gt;
      &lt;td&gt;0.06&lt;/td&gt;
      &lt;td&gt;2.03&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;URTY&lt;/td&gt;
      &lt;td&gt;1.05&lt;/td&gt;
      &lt;td&gt;1.59&lt;/td&gt;
      &lt;td&gt;0.87&lt;/td&gt;
      &lt;td&gt;2.48&lt;/td&gt;
      &lt;td&gt;3.88&lt;/td&gt;
      &lt;td&gt;0.87&lt;/td&gt;
      &lt;td&gt;3.88&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;EURL&lt;/td&gt;
      &lt;td&gt;1.51&lt;/td&gt;
      &lt;td&gt;3.08&lt;/td&gt;
      &lt;td&gt;1.15&lt;/td&gt;
      &lt;td&gt;4.21&lt;/td&gt;
      &lt;td&gt;1.34&lt;/td&gt;
      &lt;td&gt;1.15&lt;/td&gt;
      &lt;td&gt;4.21&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;EZJ&lt;/td&gt;
      &lt;td&gt;2.05&lt;/td&gt;
      &lt;td&gt;2.07&lt;/td&gt;
      &lt;td&gt;2.11&lt;/td&gt;
      &lt;td&gt;4.71&lt;/td&gt;
      &lt;td&gt;1.87&lt;/td&gt;
      &lt;td&gt;1.87&lt;/td&gt;
      &lt;td&gt;4.71&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;EDC&lt;/td&gt;
      &lt;td&gt;1.45&lt;/td&gt;
      &lt;td&gt;2.55&lt;/td&gt;
      &lt;td&gt;0.73&lt;/td&gt;
      &lt;td&gt;2.89&lt;/td&gt;
      &lt;td&gt;1.30&lt;/td&gt;
      &lt;td&gt;0.73&lt;/td&gt;
      &lt;td&gt;2.89&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;EFO&lt;/td&gt;
      &lt;td&gt;0.83&lt;/td&gt;
      &lt;td&gt;2.73&lt;/td&gt;
      &lt;td&gt;2.51&lt;/td&gt;
      &lt;td&gt;4.02&lt;/td&gt;
      &lt;td&gt;1.32&lt;/td&gt;
      &lt;td&gt;0.83&lt;/td&gt;
      &lt;td&gt;4.02&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;strong&gt;Min&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;0.83&lt;/td&gt;
      &lt;td&gt;0.85&lt;/td&gt;
      &lt;td&gt;0.73&lt;/td&gt;
      &lt;td&gt;2.03&lt;/td&gt;
      &lt;td&gt;0.06&lt;/td&gt;
      &lt;td&gt;0.06&lt;/td&gt;
      &lt;td&gt;2.03&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;strong&gt;Average&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;1.51&lt;/td&gt;
      &lt;td&gt;2.16&lt;/td&gt;
      &lt;td&gt;1.38&lt;/td&gt;
      &lt;td&gt;3.31&lt;/td&gt;
      &lt;td&gt;1.90&lt;/td&gt;
      &lt;td&gt;1.03&lt;/td&gt;
      &lt;td&gt;3.47&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;strong&gt;Max&lt;/strong&gt;&lt;/td&gt;
      &lt;td&gt;2.73&lt;/td&gt;
      &lt;td&gt;3.08&lt;/td&gt;
      &lt;td&gt;2.51&lt;/td&gt;
      &lt;td&gt;4.71&lt;/td&gt;
      &lt;td&gt;3.88&lt;/td&gt;
      &lt;td&gt;1.87&lt;/td&gt;
      &lt;td&gt;4.71&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Index&lt;/th&gt;
      &lt;th&gt;Improvement (RA)&lt;/th&gt;
      &lt;th&gt;Improvement (AQR)&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;US large&lt;/td&gt;
      &lt;td&gt;&amp;lt;0%&lt;/td&gt;
      &lt;td&gt;1.6%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;US small&lt;/td&gt;
      &lt;td&gt;0.0%&lt;/td&gt;
      &lt;td&gt;N/A&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;EAFE&lt;/td&gt;
      &lt;td&gt;1.3%&lt;/td&gt;
      &lt;td&gt;1.5%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Europe&lt;/td&gt;
      &lt;td&gt;0.9%&lt;/td&gt;
      &lt;td&gt;0.8%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Japan&lt;/td&gt;
      &lt;td&gt;0.8%&lt;/td&gt;
      &lt;td&gt;1.2%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;emerging&lt;/td&gt;
      &lt;td&gt;1.1%&lt;/td&gt;
      &lt;td&gt;0.7%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I used the 3-month T-bill yield as the risk-free rate. &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:2&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I looked at the ETFs’ current expense ratios (as of 2025-01-18) and assumed that they stayed the same over the whole sample period, which might not be true.&lt;/p&gt;

      &lt;p&gt;I did not use the actual expense ratios for EFA and EEM. Instead I used the expense ratios for VEA and VWO respectively. EFA/VEA and EEM/VWO cover the same markets but use different benchmarks so they are not perfectly correlated. Realistically, if you wanted to invest in EAFA or emerging markets, you’d buy VEA or VWO respectively, not EFA/EEM. But I used EFA/EEM as the benchmarks for EFO/EET/EDC because they’re more strongly correlated, which means the estimated excess costs have less variance. &lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:3&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;In general, a leveraged ETF has the same geometric return \(\mu\) as the index when \(\mu = \sigma^2 / 2 + R + c\) for standard deviation \(\sigma\), risk-free rate \(R\), and excess cost \(c\).&lt;/p&gt;

      &lt;p&gt;Another way to put it: when \(\mu = \sigma^2 / 2 + R + c\), optimal leverage is exactly 1x. For any higher value of \(\mu\), adding leverage increases geometric return. &lt;a href=&quot;#fnref:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:4&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Research Affiliates estimates are as of 2021-01-31; AQR estimates are as of January 2021.&lt;/p&gt;

      &lt;p&gt;I did not update these numbers for the 2025 revision because the exact numbers don’t matter too much; they’re meant more as an illustration of how different return assumptions affect the viability of leveraged ETFs. &lt;a href=&quot;#fnref:4&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:5&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;If the geometric-mean-maximizing leverage is less than 3x (which it usually is), then we can construct an optimally-leveraged portfolio by holding both a 3x leveraged ETF and an un-leveraged ETF and rebalancing their weights daily. &lt;a href=&quot;#fnref:5&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:6&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Improvements can be calculated using the following Python code:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;def return_improvement(mu, sigma, cost):
    # mu: expected geometric return; sigma: standard deviation;
    # cost: excess cost of leverage (risk-free rate assumed 0)
    # Geometric-mean-maximizing leverage ratio
    leverage = (mu + sigma**2/2 - cost) / (sigma**2)
    # Geometric return of the leveraged portfolio minus that of the index
    return (
        leverage * (mu + sigma**2/2) - (leverage * sigma)**2/2
        - (leverage - 1) * cost
        - mu
    )
&lt;/code&gt;&lt;/pre&gt;
      &lt;p&gt;&lt;a href=&quot;#fnref:6&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>Investors Can Simulate Leverage via Concentrated Stock Selection</title>
				<pubDate>Mon, 08 Feb 2021 00:00:00 -0800</pubDate>
				<link>http://mdickens.me/2021/02/08/concentrated_stock_selection/</link>
				<guid isPermaLink="true">http://mdickens.me/2021/02/08/concentrated_stock_selection/</guid>
                <description>
                  
                  
                  
                  &lt;h2 id=&quot;summary&quot;&gt;Summary&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;Last updated 2022-04-15; see &lt;a href=&quot;#errata&quot;&gt;Errata&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;a href=&quot;https://mdickens.me/confidence_tags/&quot;&gt;Confidence&lt;/a&gt;: Highly likely.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Some altruists are much less risk-averse than ordinary investors, and may want to use &lt;a href=&quot;https://mdickens.me/2020/01/06/how_much_leverage_should_altruists_use/&quot;&gt;leverage&lt;/a&gt;. But foundations and donor-advised funds legally cannot access most forms of leverage. As an alternative approach, leverage-constrained investors could buy concentrated positions in small-cap value and momentum stocks. For example, instead of buying an ETF that holds the best half of the market as ranked by value or momentum, they could buy the top 10%.&lt;/p&gt;

&lt;p&gt;According to backtests, when portfolio concentration increases, both return and risk increase, and return increases more than risk (so that concentrated portfolios have higher risk-adjusted returns).&lt;/p&gt;

&lt;p&gt;Large investors cannot hold concentrated portfolios without moving the market, so they probably prefer to use leverage if they can. Small investors probably prefer to buy concentrated investments because they offer higher risk-adjusted returns than leveraged broad portfolios.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Disclaimer: This should not be taken as investment advice. Any given portfolio results are hypothetical and do not represent returns achieved by an actual investor.&lt;/em&gt;&lt;/p&gt;

&lt;!-- more --&gt;

&lt;h2 id=&quot;contents&quot;&gt;Contents&lt;/h2&gt;

&lt;ul id=&quot;markdown-toc&quot;&gt;
  &lt;li&gt;&lt;a href=&quot;#summary&quot; id=&quot;markdown-toc-summary&quot;&gt;Summary&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#contents&quot; id=&quot;markdown-toc-contents&quot;&gt;Contents&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#motivation&quot; id=&quot;markdown-toc-motivation&quot;&gt;Motivation&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#known-approaches&quot; id=&quot;markdown-toc-known-approaches&quot;&gt;Known approaches&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#increasing-expected-return-via-concentration&quot; id=&quot;markdown-toc-increasing-expected-return-via-concentration&quot;&gt;Increasing expected return via concentration&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#fees-transaction-costs-and-taxes&quot; id=&quot;markdown-toc-fees-transaction-costs-and-taxes&quot;&gt;Fees, transaction costs, and taxes&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#how-does-concentrated-investing-compare-to-using-leverage&quot; id=&quot;markdown-toc-how-does-concentrated-investing-compare-to-using-leverage&quot;&gt;How does concentrated investing compare to using leverage?&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#on-future-expectations&quot; id=&quot;markdown-toc-on-future-expectations&quot;&gt;On future expectations&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#finding-concentrated-etfs&quot; id=&quot;markdown-toc-finding-concentrated-etfs&quot;&gt;Finding concentrated ETFs&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#errata&quot; id=&quot;markdown-toc-errata&quot;&gt;Errata&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#appendix-a-significance-tests&quot; id=&quot;markdown-toc-appendix-a-significance-tests&quot;&gt;Appendix A: Significance tests&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#appendix-b-replication-on-international-equities&quot; id=&quot;markdown-toc-appendix-b-replication-on-international-equities&quot;&gt;Appendix B: Replication on international equities&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#appendix-c-factor-regression-on-selected-portfolios&quot; id=&quot;markdown-toc-appendix-c-factor-regression-on-selected-portfolios&quot;&gt;Appendix C: Factor regression on selected portfolios&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#notes&quot; id=&quot;markdown-toc-notes&quot;&gt;Notes&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;motivation&quot;&gt;Motivation&lt;/h2&gt;

&lt;p&gt;Some investors cannot use leverage. For example, foundations, donor-advised funds, and IRAs are prohibited from using certain types of investments, including margin loans. Other investors have access to leverage but must pay high fees to get it. Sometimes, though, investors want to increase their return and risk. Can we do that without explicitly using leverage?&lt;/p&gt;

&lt;h2 id=&quot;known-approaches&quot;&gt;Known approaches&lt;/h2&gt;

&lt;p&gt;Some stocks are more volatile than others. One way to simulate leverage might be to buy stocks with high volatility. Unfortunately, this doesn’t work: high-vol stocks &lt;a href=&quot;https://en.wikipedia.org/wiki/Low-volatility_anomaly&quot;&gt;do not provide enough return&lt;/a&gt; to justify the higher risk.&lt;sup id=&quot;fnref:10&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:10&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;&lt;sup id=&quot;fnref:11&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:11&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt; This may happen because leverage-constrained investors already overpay for stocks with high volatility.&lt;/p&gt;

&lt;p&gt;As a different approach, we could tilt toward small-cap, value, and momentum stocks. These tilts all tend to increase both risk and return.&lt;sup id=&quot;fnref:12&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:12&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt;&lt;sup id=&quot;fnref:13&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:13&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt;&lt;sup id=&quot;fnref:14&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:14&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;5&lt;/a&gt;&lt;/sup&gt;&lt;sup id=&quot;fnref:15&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:15&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;6&lt;/a&gt;&lt;/sup&gt; (Gordon Irlam has &lt;a href=&quot;https://www.gordoni.com/effective-altruism/investing.html&quot;&gt;proposed&lt;/a&gt; that altruists use small-cap value rather than leverage as a (sometimes) better method of increasing return.) Most funds that implement these tilts hold around half to a third of the total market, and end up with a little bit higher volatility and expected return than the market, but not much higher.&lt;/p&gt;

&lt;h2 id=&quot;increasing-expected-return-via-concentration&quot;&gt;Increasing expected return via concentration&lt;/h2&gt;

&lt;p&gt;Small-cap stocks, value stocks, and especially small-cap value stocks tend to earn higher return (and higher risk) than the broad market. Small-cap and value index funds typically hold about half the market. Can we enhance our return&lt;sup id=&quot;fnref:9&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:9&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;7&lt;/a&gt;&lt;/sup&gt; by concentrating in a narrower corner of the market, such as 10% instead of 50%?&lt;/p&gt;

&lt;p&gt;Let’s use the &lt;a href=&quot;https://mba.tuck.dartmouth.edu/pages/faculty/ken.french/data_library.html&quot;&gt;Ken French Data Library&lt;/a&gt; to look at portfolios formed on size, value, and momentum at various levels of concentration.&lt;/p&gt;

&lt;p&gt;Ignoring size for now, did more concentrated value and momentum portfolios historically perform better?&lt;/p&gt;

&lt;p&gt;The chart below shows historical returns for US stocks using three valuation metrics and one momentum metric:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;value: book to market (B/M)&lt;/li&gt;
  &lt;li&gt;value: earnings to price (E/P)&lt;/li&gt;
  &lt;li&gt;value: cash flow to price (CF/P)&lt;/li&gt;
  &lt;li&gt;momentum: 12-month return, excluding the most recent month (Mom)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It includes both equal-weighted (EW) returns (where each stock receives the same weight) and value-weighted (VW) returns (where stocks are weighted in proportion to market cap).&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;8&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;
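&lt;p&gt;A minimal sketch of the two weighting schemes, using three hypothetical stocks:&lt;/p&gt;

```python
# Three hypothetical stocks: one year of returns and market caps (in $B)
returns = [0.10, -0.02, 0.05]
caps    = [800.0, 150.0, 50.0]

# Equal-weighted: every stock gets the same weight
ew_return = sum(returns) / len(returns)  # ~4.33%

# Value-weighted: each stock weighted by its share of total market cap
total_cap = sum(caps)
vw_return = sum((c / total_cap) * r for c, r in zip(caps, returns))  # 7.95%
```

&lt;p&gt;Here the largest stock dominates the value-weighted return, while equal weighting tilts toward the two smaller stocks.&lt;/p&gt;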

&lt;p&gt;&lt;img src=&quot;/assets/images/val-mom-ret-by-concentration.png&quot; /&gt;&lt;/p&gt;

&lt;p&gt;For each of the four metrics, return monotonically increased with concentration (with the sole exception of B/M value-weighted moving from top 20% to top 10%).&lt;/p&gt;

&lt;p&gt;Let’s narrow in on value stocks as measured by B/M. The next chart shows B/M returns over the total market as well as over small caps (defined as the 10% smallest stocks):&lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;9&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/small-cap-val-ret-by-concentration.png&quot; /&gt;&lt;/p&gt;

&lt;p&gt;We see a similar pattern, where more concentration means higher return. Within small caps, this pattern has a steeper slope, and small-cap value stocks consistently outperformed all-cap value stocks except at the weakest level of concentration. Relatedly, equal-weighting outperformed value-weighting—equal-weighting gives relatively more weight to small caps, so this is a different form of small cap tilt.&lt;/p&gt;

&lt;p&gt;Most off-the-shelf value and momentum funds hold about half the market, value-weighted. If instead of buying one of those funds, we held only the top 10% and used equal weighting, historically we could have increased returns by a lot:&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt;50% VW&lt;/th&gt;
      &lt;th&gt;10% EW&lt;/th&gt;
      &lt;th&gt;Improvement&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;B/M&lt;/td&gt;
      &lt;td&gt;11.0%&lt;/td&gt;
      &lt;td&gt;18.7%&lt;/td&gt;
      &lt;td&gt;7.7%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;E/P&lt;/td&gt;
      &lt;td&gt;13.8%&lt;/td&gt;
      &lt;td&gt;17.8%&lt;/td&gt;
      &lt;td&gt;4.0%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;CF/P&lt;/td&gt;
      &lt;td&gt;12.6%&lt;/td&gt;
      &lt;td&gt;18.1%&lt;/td&gt;
      &lt;td&gt;5.5%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Mom&lt;/td&gt;
      &lt;td&gt;12.6%&lt;/td&gt;
      &lt;td&gt;19.2%&lt;/td&gt;
      &lt;td&gt;6.6%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;As return increased with concentration, volatility increased at a similar pace. Equal-weighted portfolios had higher risk-adjusted returns than their value-weighted counterparts across the board; but among equal-weighted portfolios, risk-adjusted returns were similar at every level of concentration (50%, 40%, 30%, 20%, and 10%).&lt;/p&gt;

&lt;p&gt;These differences in returns across concentration, market cap, and weighting are highly statistically significant, as are the differences in volatility (see &lt;a href=&quot;#appendix-a-significance-tests&quot;&gt;Appendix A&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;The following table shows the risk-adjusted returns (&lt;a href=&quot;https://www.investopedia.com/terms/s/sharperatio.asp&quot;&gt;Sharpe ratios&lt;/a&gt;) for value and momentum portfolios:&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt;50% VW&lt;/th&gt;
      &lt;th&gt;10% EW&lt;/th&gt;
      &lt;th&gt;Improvement&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;B/M&lt;/td&gt;
      &lt;td&gt;0.45&lt;/td&gt;
      &lt;td&gt;0.57&lt;/td&gt;
      &lt;td&gt;0.12&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;E/P&lt;/td&gt;
      &lt;td&gt;0.71&lt;/td&gt;
      &lt;td&gt;0.74&lt;/td&gt;
      &lt;td&gt;0.03&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;CF/P&lt;/td&gt;
      &lt;td&gt;0.63&lt;/td&gt;
      &lt;td&gt;0.71&lt;/td&gt;
      &lt;td&gt;0.08&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Mom&lt;/td&gt;
      &lt;td&gt;0.57&lt;/td&gt;
      &lt;td&gt;0.73&lt;/td&gt;
      &lt;td&gt;0.16&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;We can get higher returns (accompanied by higher risk) by going even deeper into small-cap value and momentum. According to some backtests I ran on a different (non-public) data set, a portfolio of the top 30 value stocks (equal-weighted) would have historically returned about 30% per year gross of costs, with a standard deviation of about 45%. This is similar to the historical performance of a US total market index with 3:1 leverage. However, this portfolio would mainly have consisted of very small stocks (market cap $200 million or less), so trading costs could destroy a lot of the value of this strategy in practice.&lt;sup id=&quot;fnref:27&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:27&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;10&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;h2 id=&quot;fees-transaction-costs-and-taxes&quot;&gt;Fees, transaction costs, and taxes&lt;/h2&gt;

&lt;p&gt;So far, I have focused on gross returns. But real-world strategies have to pay various costs, including management fees, trading costs, and taxes. How much do these matter?&lt;/p&gt;

&lt;p&gt;I did not find much published research on real-world transaction costs and I only have limited personal experience trading individual stocks, so take the following with a grain of salt.&lt;/p&gt;

&lt;p&gt;Investors with less than $1 million probably shouldn’t worry about transaction costs. Even micro-cap stocks (with market caps between $50 million and $200 million) are liquid enough that small investors will probably end up paying 0.25% or less per trade. If you buy value stocks and rebalance once a year, transaction costs will probably be low enough not to matter. Momentum requires more frequent rebalancing, so even small investors might prefer to restrict themselves to some minimum market cap threshold (somewhere in the $200 to $500 million range) or take other measures to reduce costs.&lt;/p&gt;

&lt;p&gt;Larger investors must take care not to move the market when they buy and sell stocks. Investors with substantially more than $1 million might prefer not to trade micro-cap stocks. Investors with much more money (perhaps around $100 million) can’t use equal weighting because they will move the market too much if they try to put substantial positions into small-caps.&lt;sup id=&quot;fnref:6&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:6&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;11&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;If you hire an investment manager to run a strategy like this, they will charge a fee—probably around 1% per year. This dampens the value of investing in a concentrated portfolio rather than simply buying an off-the-shelf mutual fund or ETF. Or you could manage your own portfolio, avoiding the fee but giving yourself more work to do. I used to manage my own concentrated portfolio of value stocks, and it took me two hours of work per year. A momentum strategy would probably require somewhere around 10–20 hours per year. But if you manage the portfolio yourself, you need to be confident that you won’t make trading mistakes.&lt;/p&gt;

&lt;p&gt;Value and momentum portfolios both have some turnover, and turnover means you pay taxes on gains. The more concentrated a strategy, the more turnover it has. This may be prohibitive for taxable investors, who will pay substantial taxes if they have to sell positions regularly.&lt;sup id=&quot;fnref:19&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:19&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;12&lt;/a&gt;&lt;/sup&gt; Taxable investors could avoid turnover by &lt;a href=&quot;#finding-concentrated-etfs&quot;&gt;buying an ETF&lt;/a&gt; that uses an appropriately concentrated strategy. ETFs charge fees, and concentrated funds tend to have substantially higher fees than broad ETFs. But these fees are probably still lower than the cost of hiring an investment manager, and lower than the tax drag from trading individual stocks.&lt;/p&gt;

&lt;h2 id=&quot;how-does-concentrated-investing-compare-to-using-leverage&quot;&gt;How does concentrated investing compare to using leverage?&lt;/h2&gt;

&lt;p&gt;Adding leverage to a portfolio increases both return and volatility while keeping risk-adjusted return the same (before costs). In the backtests we looked at, more concentrated portfolios had both higher returns and higher risk-adjusted returns.&lt;/p&gt;
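&lt;p&gt;To make the arithmetic concrete, here is a minimal sketch of how leverage scales a portfolio's statistics. The inputs and the borrowing-spread parameter are hypothetical, and this ignores other real-world costs:&lt;/p&gt;

```python
def leveraged_stats(mean_excess, stdev, leverage, borrow_spread=0.0):
    """Before costs, leverage L multiplies both excess return and
    volatility by L, leaving the Sharpe ratio unchanged; a borrowing
    spread (hypothetical parameter) then reduces it.
    Inputs are annualized excess return and standard deviation."""
    # Borrowed fraction is (L - 1); it pays the spread over the risk-free rate
    lev_excess = leverage * mean_excess - (leverage - 1) * borrow_spread
    lev_stdev = leverage * stdev
    return lev_excess, lev_stdev, lev_excess / lev_stdev
```

With free leverage, a 2:1 position on a portfolio earning 6% excess at 15% volatility returns 12% excess at 30% volatility, keeping the Sharpe ratio at 0.4; any borrowing spread pulls the ratio below that.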

&lt;p&gt;Two other important considerations:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Small investors have no difficulty buying an equal-weighted portfolio of small-cap stocks. Large investors cannot buy small-caps without moving the market.&lt;/li&gt;
  &lt;li&gt;Small investors face relatively high costs of leverage. Large investors can take advantage of their size to get leverage more cheaply.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;These considerations suggest that small investors should invest in concentrated portfolios while large investors should use leverage.&lt;/p&gt;

&lt;p&gt;Under some conditions, small investors may still prefer to hold leveraged broad portfolios if they can, mainly for two reasons:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Manually managing a basket of stocks is both less tax-efficient and more time-consuming than buying and holding an ETF (or group of ETFs) and applying leverage.&lt;/li&gt;
  &lt;li&gt;The analysis in this essay only considered stock portfolios. Adding bonds, real assets, or other &lt;a href=&quot;https://mdickens.me/2020/11/23/uncorrelated_investing/&quot;&gt;factors&lt;/a&gt; might improve an investor’s overall risk-adjusted return.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2 id=&quot;on-future-expectations&quot;&gt;On future expectations&lt;/h2&gt;

&lt;p&gt;So far, we have looked at simulated historical performance of different strategies. That doesn’t tell us how these strategies will perform in the future.&lt;/p&gt;

&lt;p&gt;We care about two questions:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Will value/momentum investing continue to outperform the market?&lt;/li&gt;
  &lt;li&gt;Will &lt;em&gt;concentrated&lt;/em&gt; value/momentum investing continue to beat the market, and continue to beat diversified value/momentum investing?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The first question has been addressed elsewhere in detail. For a deep dive into arguments on why value investing might not work anymore and why they’re probably wrong, see Israel, Laursen &amp;amp; Richardson (2020), &lt;a href=&quot;https://www.aqr.com/Insights/Research/Journal-Article/Is-Systematic-Value-Investing-Dead&quot;&gt;Is (Systematic) Value Investing Dead?&lt;/a&gt; and Cliff Asness’ (less rigorous, easier to read) &lt;a href=&quot;https://www.aqr.com/Insights/Perspectives/Is-Systematic-Value-Investing-Dead&quot;&gt;article&lt;/a&gt; of the same name. The future of momentum investing is much less clear. Israel &amp;amp; Moskowitz (2013)’s &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2089466&quot;&gt;The Role of Shorting, Firm Size, and Time on Market Anomalies&lt;/a&gt; provides at least some reason to expect momentum’s outperformance to persist, but no strong evidence.&lt;sup id=&quot;fnref:20&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:20&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;13&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;For the second question, if we expect value and maybe momentum to persist, then we should probably expect concentrated portfolios’ even better performance to persist for the same reasons. In fact, the argument that concentrated value/momentum will persist appears even stronger. If strategies like value and momentum become more popular among large sophisticated investors, then they will tend to perform worse. But large investors cannot buy equal-weighted small cap portfolios. And small investors who do use value/momentum typically invest via broad mutual funds or ETFs, not concentrated baskets of stocks.&lt;/p&gt;

&lt;p&gt;As a counterpoint, just because a strategy has higher risk doesn’t mean you’re compensated for that risk. For example, a basket of 30 randomly-chosen stocks is riskier than an index fund, but doesn’t have higher expected return.&lt;sup id=&quot;fnref:21&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:21&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;14&lt;/a&gt;&lt;/sup&gt; Investing in concentrated value/momentum portfolios only makes sense if they’re able to outperform randomly-chosen concentrated portfolios.&lt;/p&gt;

&lt;h2 id=&quot;finding-concentrated-etfs&quot;&gt;Finding concentrated ETFs&lt;/h2&gt;

&lt;p&gt;The &lt;a href=&quot;https://www.etf.com/etfanalytics/etf-finder&quot;&gt;etf.com screener&lt;/a&gt; can be used to identify concentrated ETFs. For example, the following table lists every ETF filtered by “Strategy: Value” and “Weighting Scheme: Equal”. This list does not include every concentrated value ETF because (a) some funds are concentrated but not equal-weighted, and (b) some funds don’t have sufficiently detailed metadata on etf.com to show up in these screens. I attempted to estimate concentration by comparing the number of stocks in each fund to the underlying index, but the numbers might not be fully accurate.&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;ETF&lt;/th&gt;
      &lt;th&gt;Region&lt;/th&gt;
      &lt;th&gt;Market Cap&lt;/th&gt;
      &lt;th&gt;Concentration&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;a href=&quot;https://www.etf.com/EEMD&quot;&gt;EEMD&lt;/a&gt;&lt;/td&gt;
      &lt;td&gt;emerging markets&lt;/td&gt;
      &lt;td&gt;mid+large&lt;/td&gt;
      &lt;td&gt;10%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;a href=&quot;https://www.etf.com/IVAL&quot;&gt;IVAL&lt;/a&gt;&lt;/td&gt;
      &lt;td&gt;developed ex-US&lt;/td&gt;
      &lt;td&gt;mid+large&lt;/td&gt;
      &lt;td&gt;5%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;a href=&quot;https://www.etf.com/QVAL&quot;&gt;QVAL&lt;/a&gt;&lt;/td&gt;
      &lt;td&gt;US&lt;/td&gt;
      &lt;td&gt;mid+large&lt;/td&gt;
      &lt;td&gt;5%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;a href=&quot;https://www.etf.com/SPDV&quot;&gt;SPDV&lt;/a&gt;&lt;/td&gt;
      &lt;td&gt;US&lt;/td&gt;
      &lt;td&gt;mid+large&lt;/td&gt;
      &lt;td&gt;10%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;a href=&quot;https://www.etf.com/SVAL&quot;&gt;SVAL&lt;/a&gt;&lt;/td&gt;
      &lt;td&gt;US&lt;/td&gt;
      &lt;td&gt;small&lt;/td&gt;
      &lt;td&gt;12.5%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Disclosure: I invest in QVAL and IVAL.&lt;/p&gt;

&lt;h2 id=&quot;errata&quot;&gt;Errata&lt;/h2&gt;

&lt;p&gt;2022-04-15:&lt;/p&gt;

&lt;p&gt;The original version of this post claimed that concentrated and broad factor portfolios have similar risk-adjusted returns. That is incorrect: equal-weighted concentrated portfolios have higher risk-adjusted returns than value-weighted broad portfolios.&lt;/p&gt;

&lt;p&gt;Originally, I calculated the Sharpe ratio using the geometric mean, that is, &lt;code&gt;(geometric mean - risk-free rate) / standard deviation&lt;/code&gt;. I should have calculated it using the arithmetic mean: &lt;code&gt;(arithmetic mean - risk-free rate) / standard deviation&lt;/code&gt;. The geometric Sharpe ratio understates the difference in risk-adjusted return between (relatively) low-volatility and high-volatility portfolios.&lt;/p&gt;
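&lt;p&gt;As a sketch of the difference, both versions can be computed as follows (the monthly return series and risk-free rate here are hypothetical inputs, not the post's data):&lt;/p&gt;

```python
import numpy as np

def sharpe_ratios(monthly_returns, monthly_rf):
    """Compare annualized Sharpe ratios computed from the arithmetic
    mean vs. the geometric mean of monthly simple returns.
    Illustrative only; inputs are hypothetical."""
    r = np.asarray(monthly_returns)
    arith_mean = r.mean()
    geo_mean = np.prod(1 + r) ** (1 / len(r)) - 1
    sd = r.std(ddof=1)
    # Annualize: scale mean excess return by 12, stdev by sqrt(12)
    ann_sd = sd * np.sqrt(12)
    arith_sharpe = (arith_mean - monthly_rf) * 12 / ann_sd
    geo_sharpe = (geo_mean - monthly_rf) * 12 / ann_sd
    return arith_sharpe, geo_sharpe
```

Because the geometric mean is roughly the arithmetic mean minus half the variance, the geometric version penalizes a high-volatility portfolio twice (once in the numerator, once in the denominator), which is why it understates the gap.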

&lt;p&gt;This change suggests that small investors should favor concentrated portfolios over leveraged broad portfolios even if they have access to cheap leverage.&lt;/p&gt;

&lt;p&gt;I added a new table to report Sharpe ratios for various portfolios. I also corrected the table in Appendix B and updated it to include historical data through 2022-02.&lt;/p&gt;

&lt;h2 id=&quot;appendix-a-significance-tests&quot;&gt;Appendix A: Significance tests&lt;/h2&gt;

&lt;p&gt;This table shows p-values for the excess return of top 10% equal-weighted over top 50% value-weighted for various metrics.&lt;sup id=&quot;fnref:17&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:17&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;15&lt;/a&gt;&lt;/sup&gt;&lt;sup id=&quot;fnref:22&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:22&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;16&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Tested using monthly data.&lt;/li&gt;
  &lt;li&gt;Calculated over log returns on the assumption that log returns follow a normal distribution.&lt;/li&gt;
  &lt;li&gt;Using a two-sided T-test on the null hypothesis that the excess return of the concentrated portfolio is zero.&lt;/li&gt;
&lt;/ul&gt;
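&lt;p&gt;The procedure above can be sketched in Python. The return series in this example are simulated stand-ins, not the actual portfolio data:&lt;/p&gt;

```python
import numpy as np
from scipy import stats

def excess_return_pvalue(concentrated, broad):
    """Two-sided one-sample t-test on monthly log excess returns of a
    concentrated portfolio over a broad one, under the null hypothesis
    that the excess return is zero. Both inputs are monthly simple
    returns; illustrative sketch only."""
    excess = np.log1p(np.asarray(concentrated)) - np.log1p(np.asarray(broad))
    t_stat, p_val = stats.ttest_1samp(excess, 0.0)
    return excess.mean(), p_val
```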

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;metric&lt;/th&gt;
      &lt;th&gt;mean&lt;/th&gt;
      &lt;th&gt;stdev&lt;/th&gt;
      &lt;th&gt;n&lt;/th&gt;
      &lt;th&gt;p-val&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;B/M&lt;/td&gt;
      &lt;td&gt;0.66&lt;/td&gt;
      &lt;td&gt;5.401&lt;/td&gt;
      &lt;td&gt;1132&lt;/td&gt;
      &lt;td&gt;3.2e-5&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;E/P&lt;/td&gt;
      &lt;td&gt;0.32&lt;/td&gt;
      &lt;td&gt;3.294&lt;/td&gt;
      &lt;td&gt;832&lt;/td&gt;
      &lt;td&gt;6.8e-3&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;CF/P&lt;/td&gt;
      &lt;td&gt;0.43&lt;/td&gt;
      &lt;td&gt;3.536&lt;/td&gt;
      &lt;td&gt;832&lt;/td&gt;
      &lt;td&gt;6.4e-4&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Mom&lt;/td&gt;
      &lt;td&gt;0.53&lt;/td&gt;
      &lt;td&gt;3.615&lt;/td&gt;
      &lt;td&gt;1119&lt;/td&gt;
      &lt;td&gt;9.6e-7&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Interestingly, of the two null hypotheses&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;a concentrated factor portfolio has the same return as a broad factor portfolio, and&lt;/li&gt;
  &lt;li&gt;a concentrated factor portfolio has the same return as the broad market,&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;the first hypothesis is rejected much more strongly. This happens because the two factor portfolios are highly correlated, so their difference has a relatively low standard deviation.&lt;/p&gt;

&lt;p&gt;The excess returns of (a) top 10% VW over top 50% VW, (b) top 10% EW over top 50% EW, (c) top 50% EW over top 50% VW, and (d) top 10% EW over top 10% VW were generally statistically significant, but to a lesser degree. The next table gives p-values for these four portfolio differences using each of the four factors:&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt;B/M&lt;/th&gt;
      &lt;th&gt;E/P&lt;/th&gt;
      &lt;th&gt;CF/P&lt;/th&gt;
      &lt;th&gt;Mom&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;10% VW&lt;/td&gt;
      &lt;td&gt;50% VW&lt;/td&gt;
      &lt;td&gt;0.28&lt;/td&gt;
      &lt;td&gt;0.08&lt;/td&gt;
      &lt;td&gt;0.03&lt;/td&gt;
      &lt;td&gt;2e-4&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;10% EW&lt;/td&gt;
      &lt;td&gt;50% EW&lt;/td&gt;
      &lt;td&gt;2e-3&lt;/td&gt;
      &lt;td&gt;2e-3&lt;/td&gt;
      &lt;td&gt;3e-3&lt;/td&gt;
      &lt;td&gt;2e-4&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;50% EW&lt;/td&gt;
      &lt;td&gt;50% VW&lt;/td&gt;
      &lt;td&gt;2e-5&lt;/td&gt;
      &lt;td&gt;0.06&lt;/td&gt;
      &lt;td&gt;3e-3&lt;/td&gt;
      &lt;td&gt;1e-4&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;10% EW&lt;/td&gt;
      &lt;td&gt;10% VW&lt;/td&gt;
      &lt;td&gt;2e-5&lt;/td&gt;
      &lt;td&gt;0.11&lt;/td&gt;
      &lt;td&gt;0.03&lt;/td&gt;
      &lt;td&gt;0.03&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;(9 out of 16 are significant at p=0.01, and 5 are significant at p=0.001.)&lt;/p&gt;

&lt;p&gt;This suggests that both methods of increasing concentration (moving from the top 50% to the top 10%, and switching from value-weighting to equal-weighting) independently increased return.&lt;/p&gt;

&lt;p&gt;The next table shows p-values from significance tests for differences in standard deviation among B/M (equal-weighted) portfolios. These differences were highly significant.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Tested using monthly data, n=1132.&lt;/li&gt;
  &lt;li&gt;Differences in standard deviations were tested using a two-sided F-test.&lt;/li&gt;
&lt;/ul&gt;
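&lt;p&gt;A two-sided F-test of this kind can be sketched as follows (again with simulated return series standing in for the actual data):&lt;/p&gt;

```python
import numpy as np
from scipy import stats

def variance_f_test(returns_a, returns_b):
    """Two-sided F-test of the null hypothesis that two monthly return
    series have equal variance (assumes normality). Illustrative sketch."""
    a, b = np.asarray(returns_a), np.asarray(returns_b)
    f_ratio = np.var(a, ddof=1) / np.var(b, ddof=1)
    dfa, dfb = len(a) - 1, len(b) - 1
    # Double the smaller tail probability for a two-sided test
    p_two = 2 * min(stats.f.cdf(f_ratio, dfa, dfb),
                    stats.f.sf(f_ratio, dfa, dfb))
    return f_ratio, p_two
```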

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt;p-val&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;50% BM&lt;/td&gt;
      &lt;td&gt;10% BM, small 10%&lt;/td&gt;
      &lt;td&gt;1e-22&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;50% BM&lt;/td&gt;
      &lt;td&gt;10% BM&lt;/td&gt;
      &lt;td&gt;1e-11&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;30% BM&lt;/td&gt;
      &lt;td&gt;10% BM&lt;/td&gt;
      &lt;td&gt;1e-6&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;10% BM&lt;/td&gt;
      &lt;td&gt;10% BM, small 10%&lt;/td&gt;
      &lt;td&gt;1e-8&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;30% BM&lt;/td&gt;
      &lt;td&gt;30% BM, small 10%&lt;/td&gt;
      &lt;td&gt;1e-10&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Caveat #1: These t-tests assume that stock returns follow a normal distribution. Although this is a common assumption, it’s not quite accurate, as stocks tend to experience tail events more frequently than a normal distribution would predict. That means the “true” p-values are higher than what’s reported above, and the lower the reported p-value, the more wrong it is. For example, at one point the concentrated B/M portfolio experienced an excess drawdown of 43% over a 10-month period, which “should” only happen once every 21,000 years.&lt;sup id=&quot;fnref:24&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:24&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;17&lt;/a&gt;&lt;/sup&gt; (Similarly, US equities “should” experience an 80% drawdown only once every 70,000 years, and yet it happened during the Great Depression.)&lt;/p&gt;
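&lt;p&gt;For intuition, this is the kind of tail-frequency calculation involved. The parameters below are hypothetical (so this does not reproduce the numbers above), but it shows how a normal model converts a drawdown into a "once every N years" figure:&lt;/p&gt;

```python
import numpy as np
from scipy import stats

def expected_years_between(drawdown, monthly_mean, monthly_sd, months):
    """Under an i.i.d. normal model of monthly log returns, estimate how
    many years pass between cumulative returns over `months` months that
    fall below log(1 - drawdown). Treats non-overlapping windows as
    independent trials; a rough sketch with hypothetical parameters."""
    total_mean = months * monthly_mean
    total_sd = monthly_sd * np.sqrt(months)
    p = stats.norm.cdf(np.log1p(-drawdown), total_mean, total_sd)
    return (months / 12) / p
```

The key point is that `p` shrinks extremely fast in the tail of the normal distribution, so fat-tailed real returns make events the model calls once-in-millennia show up within a century of data.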

&lt;p&gt;Caveat #2: I am not particularly well-versed in significance testing, so I could have made mistakes in these calculations.&lt;/p&gt;

&lt;h2 id=&quot;appendix-b-replication-on-international-equities&quot;&gt;Appendix B: Replication on international equities&lt;/h2&gt;

&lt;p&gt;The Ken French Data Library only includes international equity returns back to 1990 (instead of 1926), and only reports value/size quintiles rather than deciles. So this international replication is more limited, but it illustrates the same patterns.&lt;/p&gt;

&lt;p&gt;This table gives summary statistics for a variety of equal-weighted and value-weighted portfolios on B/M and size at various levels of concentration. As with US equities, geometric return increased with higher concentration, smaller size, and equal weighting. These three relationships were all statistically significant at p&amp;lt;0.001, even when excluding the first portfolio in the table (which had a much higher return than the others).&lt;sup id=&quot;fnref:7&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:7&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;18&lt;/a&gt;&lt;/sup&gt;&lt;sup id=&quot;fnref:8&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:8&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;19&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt;&lt;a href=&quot;https://en.wikipedia.org/wiki/Sharpe_ratio&quot;&gt;Sharpe ratio&lt;/a&gt;&lt;/th&gt;
      &lt;th&gt;Return&lt;/th&gt;
      &lt;th&gt;Standard Deviation&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;20% BM, small 20% EW&lt;/td&gt;
      &lt;td&gt;0.97&lt;/td&gt;
      &lt;td&gt;18.1%&lt;/td&gt;
      &lt;td&gt;16.3%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;20% BM            EW&lt;/td&gt;
      &lt;td&gt;0.50&lt;/td&gt;
      &lt;td&gt;10.2%&lt;/td&gt;
      &lt;td&gt;17.5%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;40% BM, small 40% EW&lt;/td&gt;
      &lt;td&gt;0.62&lt;/td&gt;
      &lt;td&gt;11.6%&lt;/td&gt;
      &lt;td&gt;16.3%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;40% BM            EW&lt;/td&gt;
      &lt;td&gt;0.47&lt;/td&gt;
      &lt;td&gt;9.2%&lt;/td&gt;
      &lt;td&gt;17.0%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;20% BM, small 20% VW&lt;/td&gt;
      &lt;td&gt;0.63&lt;/td&gt;
      &lt;td&gt;11.4%&lt;/td&gt;
      &lt;td&gt;15.4%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;20% BM            VW&lt;/td&gt;
      &lt;td&gt;0.40&lt;/td&gt;
      &lt;td&gt;8.1%&lt;/td&gt;
      &lt;td&gt;16.9%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;40% BM, small 40% VW&lt;/td&gt;
      &lt;td&gt;0.48&lt;/td&gt;
      &lt;td&gt;9.0%&lt;/td&gt;
      &lt;td&gt;15.5%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;40% BM            VW&lt;/td&gt;
      &lt;td&gt;0.39&lt;/td&gt;
      &lt;td&gt;7.7%&lt;/td&gt;
      &lt;td&gt;16.3%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;h2 id=&quot;appendix-c-factor-regression-on-selected-portfolios&quot;&gt;Appendix C: Factor regression on selected portfolios&lt;/h2&gt;

&lt;p&gt;This table provides a monthly time series factor regression on several portfolios, using factors as defined in the &lt;a href=&quot;https://mba.tuck.dartmouth.edu/pages/faculty/ken.french/data_library.html&quot;&gt;Ken French data library&lt;/a&gt;.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Beta: market beta factor.&lt;/li&gt;
  &lt;li&gt;SMB: size factor.&lt;/li&gt;
  &lt;li&gt;HML: value factor.&lt;/li&gt;
  &lt;li&gt;Alpha: excess return not explained by these three factors.&lt;/li&gt;
  &lt;li&gt;P-values are reported for the null hypothesis that alpha = 0.&lt;/li&gt;
  &lt;li&gt;I use 30% as the diversified portfolio rather than 50% because I cannot fully accurately construct a 50% value-weighted portfolio using the Ken French data. For the data on historical returns, I wasn’t concerned about this because it doesn’t make much difference; but for a factor regression, I want to be as accurate as possible.&lt;/li&gt;
&lt;/ul&gt;
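&lt;p&gt;A time-series regression like this can be sketched with ordinary least squares. The factor and portfolio series below are simulated placeholders; a real analysis would use the Ken French data and also report standard errors and p-values:&lt;/p&gt;

```python
import numpy as np

def factor_regression(portfolio_excess, factors):
    """Regress a portfolio's monthly excess returns on factor returns
    (e.g. market beta, SMB, HML). Returns the intercept (alpha) and the
    factor loadings. Minimal OLS sketch via least squares."""
    y = np.asarray(portfolio_excess)
    # Prepend a column of ones so the first coefficient is the intercept
    X = np.column_stack([np.ones(len(y)), np.asarray(factors)])
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
    alpha, betas = coefs[0], coefs[1:]
    return alpha, betas
```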

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt;Beta&lt;/th&gt;
      &lt;th&gt;SMB&lt;/th&gt;
      &lt;th&gt;HML&lt;/th&gt;
      &lt;th&gt;Alpha&lt;/th&gt;
      &lt;th&gt;p-val&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;30% BM VW&lt;/td&gt;
      &lt;td&gt;1.07&lt;/td&gt;
      &lt;td&gt;0.22&lt;/td&gt;
      &lt;td&gt;0.79&lt;/td&gt;
      &lt;td&gt;-0.05&lt;/td&gt;
      &lt;td&gt;0.07&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;10% BM VW&lt;/td&gt;
      &lt;td&gt;1.18&lt;/td&gt;
      &lt;td&gt;0.55&lt;/td&gt;
      &lt;td&gt;1.08&lt;/td&gt;
      &lt;td&gt;-0.23&lt;/td&gt;
      &lt;td&gt;0.002&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;10% BM, small 10% VW&lt;/td&gt;
      &lt;td&gt;0.97&lt;/td&gt;
      &lt;td&gt;1.57&lt;/td&gt;
      &lt;td&gt;1.14&lt;/td&gt;
      &lt;td&gt;0.05&lt;/td&gt;
      &lt;td&gt;0.74&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;30% BM EW&lt;/td&gt;
      &lt;td&gt;1.01&lt;/td&gt;
      &lt;td&gt;1.06&lt;/td&gt;
      &lt;td&gt;0.85&lt;/td&gt;
      &lt;td&gt;0.24&lt;/td&gt;
      &lt;td&gt;2e-5&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;10% BM EW&lt;/td&gt;
      &lt;td&gt;1.03&lt;/td&gt;
      &lt;td&gt;1.33&lt;/td&gt;
      &lt;td&gt;1.11&lt;/td&gt;
      &lt;td&gt;0.31&lt;/td&gt;
      &lt;td&gt;0.004&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;10% BM, small 10% EW&lt;/td&gt;
      &lt;td&gt;0.97&lt;/td&gt;
      &lt;td&gt;1.71&lt;/td&gt;
      &lt;td&gt;1.21&lt;/td&gt;
      &lt;td&gt;0.59&lt;/td&gt;
      &lt;td&gt;0.0008&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Based on this limited sample, it appears that more concentrated portfolios do not have higher market beta, but do tend to have greater exposure to the size and value factors, as well as higher alpha. Most notably, the equal-weighted portfolios have more alpha than the value-weighted ones.&lt;/p&gt;

&lt;p&gt;If we add leverage to the 30% VW portfolio in an attempt to replicate a more concentrated portfolio like 10% EW, the former will have less alpha and more market beta. This may be undesirable in the context of a broader portfolio that includes other allocations to equities.&lt;/p&gt;

&lt;p&gt;I found this result surprising. I would have predicted that concentrated portfolios’ higher return would primarily come from higher factor exposure, especially exposure to the value factor (HML). While my prediction is at least partially true, even the most concentrated portfolio only had 1.21x loading on HML, and a significant portion of the outperformance could not be explained by any factor.&lt;/p&gt;

&lt;p&gt;Conventionally, HML is defined as the return of the top 30% of value stocks minus the bottom 30%, value-weighted. If we redefine HML as the top 30% minus the bottom 30% &lt;em&gt;equal-weighted&lt;/em&gt; (keeping the other factor definitions the same), we get an interesting result:&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt;Beta&lt;/th&gt;
      &lt;th&gt;SMB&lt;/th&gt;
      &lt;th&gt;HML EW&lt;/th&gt;
      &lt;th&gt;Alpha&lt;/th&gt;
      &lt;th&gt;p-val&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;30% BM VW&lt;/td&gt;
      &lt;td&gt;1.13&lt;/td&gt;
      &lt;td&gt;0.03&lt;/td&gt;
      &lt;td&gt;0.53&lt;/td&gt;
      &lt;td&gt;-0.21&lt;/td&gt;
      &lt;td&gt;8e-5&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;10% BM VW&lt;/td&gt;
      &lt;td&gt;1.26&lt;/td&gt;
      &lt;td&gt;0.27&lt;/td&gt;
      &lt;td&gt;0.78&lt;/td&gt;
      &lt;td&gt;-0.47&lt;/td&gt;
      &lt;td&gt;9e-8&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;10% BM, small 10% VW&lt;/td&gt;
      &lt;td&gt;1.03&lt;/td&gt;
      &lt;td&gt;1.15&lt;/td&gt;
      &lt;td&gt;1.08&lt;/td&gt;
      &lt;td&gt;-0.37&lt;/td&gt;
      &lt;td&gt;0.004&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;30% BM EW&lt;/td&gt;
      &lt;td&gt;1.06&lt;/td&gt;
      &lt;td&gt;0.79&lt;/td&gt;
      &lt;td&gt;0.72&lt;/td&gt;
      &lt;td&gt;-0.02&lt;/td&gt;
      &lt;td&gt;0.7&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;10% BM EW&lt;/td&gt;
      &lt;td&gt;1.09&lt;/td&gt;
      &lt;td&gt;0.96&lt;/td&gt;
      &lt;td&gt;0.99&lt;/td&gt;
      &lt;td&gt;-0.06&lt;/td&gt;
      &lt;td&gt;0.6&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;10% BM, small 10% EW&lt;/td&gt;
      &lt;td&gt;1.02&lt;/td&gt;
      &lt;td&gt;1.24&lt;/td&gt;
      &lt;td&gt;1.23&lt;/td&gt;
      &lt;td&gt;0.09&lt;/td&gt;
      &lt;td&gt;0.6&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;We can see that higher factor loadings on SMB and HML EW fully explain the return of the more concentrated portfolios. Meanwhile, the value-weighted portfolios have statistically significant &lt;em&gt;negative&lt;/em&gt; alpha.&lt;/p&gt;

&lt;p&gt;One possible explanation is that switching from value-weighting to equal-weighting “unlocks” more of the value premium by allowing investors to assign relatively higher weight to the most undervalued stocks, and this component of the value premium is invisible to HML VW.&lt;/p&gt;

&lt;p&gt;The other value metrics, E/P and CF/P, exhibit a similar effect: when we regress on a value-weighted HML factor using E/P or CF/P (instead of B/M), the equal-weighted portfolios have statistically significant alpha at p&amp;lt;0.001, with 10% EW having more alpha than 30% EW. On an equal-weighted HML factor, value-weighted portfolios have negative alphas, but none are statistically significant at p&amp;lt;0.001 (or even p&amp;lt;0.01).&lt;/p&gt;

&lt;p&gt;The following table gives factor regressions for selected momentum portfolios. “Mom” is the momentum factor.&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt;Beta&lt;/th&gt;
      &lt;th&gt;SMB&lt;/th&gt;
      &lt;th&gt;Mom&lt;/th&gt;
      &lt;th&gt;Alpha&lt;/th&gt;
      &lt;th&gt;p-val&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;30% Mom VW&lt;/td&gt;
      &lt;td&gt;1.07&lt;/td&gt;
      &lt;td&gt;0.08&lt;/td&gt;
      &lt;td&gt;0.37&lt;/td&gt;
      &lt;td&gt;0.03&lt;/td&gt;
      &lt;td&gt;0.35&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;20% Mom, small 20% VW&lt;/td&gt;
      &lt;td&gt;1.09&lt;/td&gt;
      &lt;td&gt;1.32&lt;/td&gt;
      &lt;td&gt;0.23&lt;/td&gt;
      &lt;td&gt;0.44&lt;/td&gt;
      &lt;td&gt;1e-4&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;30% Mom EW&lt;/td&gt;
      &lt;td&gt;1.03&lt;/td&gt;
      &lt;td&gt;0.74&lt;/td&gt;
      &lt;td&gt;0.27&lt;/td&gt;
      &lt;td&gt;0.31&lt;/td&gt;
      &lt;td&gt;1e-11&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;10% Mom EW&lt;/td&gt;
      &lt;td&gt;1.11&lt;/td&gt;
      &lt;td&gt;0.94&lt;/td&gt;
      &lt;td&gt;0.45&lt;/td&gt;
      &lt;td&gt;0.28&lt;/td&gt;
      &lt;td&gt;1e-5&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;20% Mom, small 20% EW&lt;/td&gt;
      &lt;td&gt;1.05&lt;/td&gt;
      &lt;td&gt;1.38&lt;/td&gt;
      &lt;td&gt;0.17&lt;/td&gt;
      &lt;td&gt;0.65&lt;/td&gt;
      &lt;td&gt;1e-6&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Here we see qualitatively similar results to the first table, where more concentrated portfolios tend to have stronger exposure to the size and momentum factors, as well as significantly positive alpha.&lt;/p&gt;

&lt;p&gt;When we use an equal-weighted momentum factor instead of value-weighted, concentrated portfolios still have statistically significant alpha. However, when we replace the momentum factor with an equal-weighted long-only factor (that is, 30% Mom EW minus the risk-free rate), none of the momentum portfolios have alpha (either positive or negative).&lt;sup id=&quot;fnref:18&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:18&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;20&lt;/a&gt;&lt;/sup&gt; Relatedly, while equal-weighted momentum portfolios did earn higher risk-adjusted returns than comparable value-weighted strategies, the difference was not as large as for B/M. These two observations suggest that switching from value-weighting to equal-weighting does not matter as much for momentum as it does for value. This makes intuitive sense: equal-weighting implicitly tilts toward small-cap value, so it unlocks more of the value premium, but there’s no reason to expect it to do the same for momentum.&lt;/p&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:10&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Ang, Hodrick, Xing &amp;amp; Zhang (2006). &lt;a href=&quot;https://www.nber.org/system/files/working_papers/w10852/w10852.pdf&quot;&gt;The Cross-Section of Volatility and Expected Returns.&lt;/a&gt; &lt;a href=&quot;#fnref:10&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:11&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Frazzini &amp;amp; Pedersen (2013). &lt;a href=&quot;http://pages.stern.nyu.edu/~lpederse/papers/BettingAgainstBeta.pdf&quot;&gt;Betting Against Beta.&lt;/a&gt; &lt;a href=&quot;#fnref:11&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:12&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Fama and French (1992). &lt;a href=&quot;http://www.business.unr.edu/faculty/liuc/files/badm742/fama_french_1992.pdf&quot;&gt;The Cross-Section of Expected Stock Returns.&lt;/a&gt; &lt;a href=&quot;#fnref:12&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:13&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Jegadeesh and Titman (1993). &lt;a href=&quot;http://www.business.unr.edu/faculty/liuc/files/BADM742/Jegadeesh_Titman_1993.pdf&quot;&gt;Returns to Buying Winners and Selling Losers: Implications for Stock Market Efficiency.&lt;/a&gt; &lt;a href=&quot;#fnref:13&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:14&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Asness, Moskowitz, and Pedersen (2013). &lt;a href=&quot;http://pages.stern.nyu.edu/~lpederse/papers/ValMomEverywhere.pdf&quot;&gt;Value and Momentum Everywhere.&lt;/a&gt; &lt;a href=&quot;#fnref:14&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:15&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Baltussen, Swinkels, and van Vliet (2019). &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3325720&quot;&gt;Global Factor Premiums.&lt;/a&gt; &lt;a href=&quot;#fnref:15&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:9&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;All returns reported in this essay are geometric returns, not arithmetic. &lt;a href=&quot;#fnref:9&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;B/M and Mom historical returns were calculated over the period 1927–2020. E/P and CF/P were calculated over 1951–2020. &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:2&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I can only show this chart for B/M because that’s the only metric for which the Ken French Data Library includes the necessary data. &lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:27&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;It makes sense that the highest returns would show up in stocks where it’s difficult to actually take advantage of their high return. &lt;a href=&quot;#fnref:27&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:6&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;The paper &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3229719&quot;&gt;Trading Costs&lt;/a&gt; by AQR Capital Management found that their own trading costs were significant enough that they probably could not run an equal-weighted small cap value strategy like I described above, but they can still implement value and momentum in mid to large caps. AQR invests billions to tens of billions of dollars in each of its funds, so this information doesn’t tell us much about what smaller investors can do, but it at least indicates that investors with billions of dollars probably can’t increase risk and return via concentration in the manner I describe in this essay. Such investors who want to increase return probably need to use leverage. &lt;a href=&quot;#fnref:6&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:19&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;In Joel Greenblatt’s book, &lt;em&gt;The Little Book that Beats the Market&lt;/em&gt;, in which he proposes following a concentrated value investing strategy, he offers a trick to increase tax efficiency:&lt;/p&gt;

      &lt;blockquote&gt;
        &lt;p&gt;For individual stocks in which we are showing a loss from our initial purchase price, we will want to sell a few days before our one-year holding period is up. For those stocks with a gain, we will want to sell a day or two after the one-year period is up. In that way, all of our gains will receive the advantages of the lower tax rate afforded to long-term capital gains […], and all of our losses will receive short-term tax treatment […].&lt;/p&gt;
      &lt;/blockquote&gt;

      &lt;p&gt;This works for US investors; I can’t comment on how taxes work in other countries. &lt;a href=&quot;#fnref:19&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:20&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Perhaps the best argument that momentum will persist: There’s good reason to believe that the value premium will persist, and if value can persist, why not momentum? &lt;a href=&quot;#fnref:20&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:21&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;If 30 stocks are chosen uniformly at random, they would have outperformed historically, because equal weighting has outperformed market cap weighting. But if the 30 stocks were selected at random in proportion to market cap, then they’d have the same expected return as the market, but with higher volatility. &lt;a href=&quot;#fnref:21&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:17&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;All p-values are rounded up. For any p-values smaller than 1e-5, I rounded up to the nearest power of 10. &lt;a href=&quot;#fnref:17&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:22&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;We have discussed concentration along three dimensions: factor concentration, weighting, and market cap. This table compares the most concentrated to the least concentrated portfolios along the first two dimensions, but does not use market cap because not all factors had data on portfolios sorted by market cap. &lt;a href=&quot;#fnref:22&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:24&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Technically, the probability (assuming a normal distribution) is higher than that, because I cherry-picked the worst rolling 10-month period rather than looking at non-overlapping periods. &lt;a href=&quot;#fnref:24&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:7&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;This surprised me—I thought some of the relationships would be statistically insignificant because the data sample only covers 30 years. For comparison, the long/short value factor (HML) was statistically insignificant over this sample (p=0.34), as was market beta (p=0.09). &lt;a href=&quot;#fnref:7&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:8&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I did not establish a significance threshold in advance. p&amp;lt;0.001 was the smallest power of 10 at which all three tests were significant. &lt;a href=&quot;#fnref:8&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:18&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;All alphas fall between -0.05 and 0.07, with the lowest p-value being 0.28. &lt;a href=&quot;#fnref:18&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>Asset Allocation and Leverage for Altruists with Constraints</title>
				<pubDate>Mon, 14 Dec 2020 00:00:00 -0800</pubDate>
				<link>http://mdickens.me/2020/12/14/asset_allocation_for_altruists_with_constraints/</link>
				<guid isPermaLink="true">http://mdickens.me/2020/12/14/asset_allocation_for_altruists_with_constraints/</guid>
                <description>
                  
                  
                  
                  &lt;h1 id=&quot;summary&quot;&gt;Summary&lt;/h1&gt;

&lt;p&gt;Altruistic investors differ from ordinary investors in that they don’t just care about their own investments, but about the investments of all altruists.&lt;/p&gt;

&lt;p&gt;We can use our own investments to improve the overall altruistic portfolio in two key ways:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Increase expected return by investing in high-return assets or by using leverage.&lt;/li&gt;
  &lt;li&gt;Reduce risk by investing in assets with low correlation to the altruistic portfolio.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;At the margin, we have to choose between either increasing expected return or reducing correlation. How do we make that decision?&lt;/p&gt;

&lt;p&gt;We can extend the commonly-used technique of &lt;a href=&quot;http://www.columbia.edu/~mh2078/FoundationsFE/MeanVariance-CAPM.pdf&quot;&gt;mean-variance optimization (MVO)&lt;/a&gt; to derive optimal asset allocations under various assumptions. We don’t know which assumptions apply to the real world, but we can draw some general lessons. The results suggest that we should try to both increase expected return and decrease correlation, but that we should prioritize increasing expected return.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Disclaimer: This should not be taken as investment advice. Any given portfolio results are hypothetical and do not represent returns achieved by an actual investor. Any asset allocation described as “optimal”, how an investor “should” invest, or similarly, is only considered such for the goal of maximizing geometric return under specific theoretical conditions, and may not be optimal for any actual investors.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Cross-posted to the &lt;a href=&quot;https://forum.effectivealtruism.org/posts/gBDgyGQ6mEi9bpufB/asset-allocation-and-leverage-for-altruists-with-constraints&quot;&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;!-- more --&gt;

&lt;h1 id=&quot;contents&quot;&gt;Contents&lt;/h1&gt;

&lt;ul id=&quot;markdown-toc&quot;&gt;
  &lt;li&gt;&lt;a href=&quot;#summary&quot; id=&quot;markdown-toc-summary&quot;&gt;Summary&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#contents&quot; id=&quot;markdown-toc-contents&quot;&gt;Contents&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#key-takeaways&quot; id=&quot;markdown-toc-key-takeaways&quot;&gt;Key takeaways&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#introduction&quot; id=&quot;markdown-toc-introduction&quot;&gt;Introduction&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#problem-setup&quot; id=&quot;markdown-toc-problem-setup&quot;&gt;Problem setup&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#the-optimization-problem&quot; id=&quot;markdown-toc-the-optimization-problem&quot;&gt;The optimization problem&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#assumptions&quot; id=&quot;markdown-toc-assumptions&quot;&gt;Assumptions&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#sample-assets&quot; id=&quot;markdown-toc-sample-assets&quot;&gt;Sample assets&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#optimal-asset-allocation&quot; id=&quot;markdown-toc-optimal-asset-allocation&quot;&gt;Optimal asset allocation&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#with-a-leverage-constraint&quot; id=&quot;markdown-toc-with-a-leverage-constraint&quot;&gt;With a leverage constraint&lt;/a&gt;        &lt;ul&gt;
          &lt;li&gt;&lt;a href=&quot;#leverage-constraint-but-with-shorts-allowed&quot; id=&quot;markdown-toc-leverage-constraint-but-with-shorts-allowed&quot;&gt;Leverage constraint, but with shorts allowed&lt;/a&gt;&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#with-a-risk-constraint&quot; id=&quot;markdown-toc-with-a-risk-constraint&quot;&gt;With a risk constraint&lt;/a&gt;        &lt;ul&gt;
          &lt;li&gt;&lt;a href=&quot;#sensitivity-analysis&quot; id=&quot;markdown-toc-sensitivity-analysis&quot;&gt;Sensitivity analysis&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#using-more-traditional-assets&quot; id=&quot;markdown-toc-using-more-traditional-assets&quot;&gt;Using more traditional assets&lt;/a&gt;&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#the-returncorrelation-tradeoff&quot; id=&quot;markdown-toc-the-returncorrelation-tradeoff&quot;&gt;The return/correlation tradeoff&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#impact-of-assumptions&quot; id=&quot;markdown-toc-impact-of-assumptions&quot;&gt;Impact of assumptions&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#leverage-and-shorts-are-free&quot; id=&quot;markdown-toc-leverage-and-shorts-are-free&quot;&gt;Leverage and shorts are free&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#we-control-only-a-small-portion-of-the-total-altruistic-portfolio&quot; id=&quot;markdown-toc-we-control-only-a-small-portion-of-the-total-altruistic-portfolio&quot;&gt;We control only a small portion of the total altruistic portfolio&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#assets-follow-log-normal-distributions&quot; id=&quot;markdown-toc-assets-follow-log-normal-distributions&quot;&gt;Assets follow log-normal distributions&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#asset-correlations-are-stable-over-time&quot; id=&quot;markdown-toc-asset-correlations-are-stable-over-time&quot;&gt;Asset correlations are stable over time&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#optimization-techniques-can-predict-optimal-future-asset-allocations&quot; id=&quot;markdown-toc-optimization-techniques-can-predict-optimal-future-asset-allocations&quot;&gt;Optimization techniques can predict optimal future asset allocations&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#altruists-have-logarithmic-utility&quot; id=&quot;markdown-toc-altruists-have-logarithmic-utility&quot;&gt;Altruists have logarithmic utility&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#conclusions&quot; id=&quot;markdown-toc-conclusions&quot;&gt;Conclusions&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#appendix&quot; id=&quot;markdown-toc-appendix&quot;&gt;Appendix&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#appendix-a-derivation-of-meanstandard-deviationcorrelation-estimates&quot; id=&quot;markdown-toc-appendix-a-derivation-of-meanstandard-deviationcorrelation-estimates&quot;&gt;Appendix A: Derivation of mean/standard deviation/correlation estimates&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#appendix-b-sensitivity-analysis-of-risk-constrained-allocation&quot; id=&quot;markdown-toc-appendix-b-sensitivity-analysis-of-risk-constrained-allocation&quot;&gt;Appendix B: Sensitivity analysis of risk-constrained allocation&lt;/a&gt;        &lt;ul&gt;
          &lt;li&gt;&lt;a href=&quot;#variations-on-market--valmom--vmot--manfut&quot; id=&quot;markdown-toc-variations-on-market--valmom--vmot--manfut&quot;&gt;Variations on Market + Val/Mom + VMOT + ManFut&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#variations-on-us--international--commodities--bonds&quot; id=&quot;markdown-toc-variations-on-us--international--commodities--bonds&quot;&gt;Variations on US + International + Commodities + Bonds&lt;/a&gt;&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#notes&quot; id=&quot;markdown-toc-notes&quot;&gt;Notes&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h1 id=&quot;key-takeaways&quot;&gt;Key takeaways&lt;/h1&gt;

&lt;ul&gt;
  &lt;li&gt;Altruistic investors care not just about their own investment performance, but about the performance of all other value-aligned altruistic investors. This suggests that altruists should pay particular attention to assets that are &lt;a href=&quot;https://mdickens.me/2020/11/23/uncorrelated_investing/&quot;&gt;uncorrelated&lt;/a&gt; to typical investments. &lt;a href=&quot;#introduction&quot;&gt;[More]&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;http://www.columbia.edu/~mh2078/FoundationsFE/MeanVariance-CAPM.pdf&quot;&gt;Mean-variance optimization (MVO)&lt;/a&gt;&lt;sup id=&quot;fnref:10&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:10&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt; is the standard approach for determining the theoretically optimal asset allocation. But MVO only works for a fully self-contained portfolio: it cannot determine how altruistic investors can maximize the expected utility of the overall altruistic portfolio. We can transform this unsolvable problem into a solvable one if we model altruists as having logarithmic utility of money. In that case, small investors want to find the personal asset allocation that maximizes the &lt;a href=&quot;https://en.wikipedia.org/wiki/Geometric_mean&quot;&gt;geometric mean&lt;/a&gt; return of the altruistic portfolio. &lt;a href=&quot;#problem-setup&quot;&gt;[More]&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;Under certain assumptions, this model produces the following optimal asset allocations given various leverage and risk constraints:
    &lt;ol&gt;
      &lt;li&gt;Investors who cannot use leverage should (probably) invest all their money into whichever asset has the highest geometric mean return. &lt;a href=&quot;#with-a-leverage-constraint&quot;&gt;[More]&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;Investors who can use leverage but cannot short-sell should invest in the same way as un-leveraged investors, but with more leverage. &lt;a href=&quot;#with-a-leverage-constraint&quot;&gt;[More]&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;Investors who can use both leverage and shorts, but who do not want their portfolios to exceed some maximum volatility, should invest in a more complex mix of assets. The optimal allocation probably includes assets with high expected return and with low correlation, but with more emphasis on high return. It also includes a moderate short market position to reduce correlation to other altruistic investors. &lt;a href=&quot;#with-a-risk-constraint&quot;&gt;[More]&lt;/a&gt;
        &lt;ul&gt;
          &lt;li&gt;Notably, the short position is much smaller than the long position. The optimal allocation is not totally uncorrelated to the typical altruistic portfolio, sacrificing some diversification in the interest of increasing expected return.&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
    &lt;/ol&gt;
  &lt;/li&gt;
  &lt;li&gt;There is a tradeoff between increasing return and decreasing correlation. Within this model, each unit of correlation is worth about 2 to 3 units of return (e.g., decreasing correlation by 0.1 is as good as increasing return by 0.2 to 0.3 percentage points). &lt;a href=&quot;#the-returncorrelation-tradeoff&quot;&gt;[More]&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;For some assumptions, we can see how altering them changes the theoretically optimal portfolio: &lt;a href=&quot;#sensitivity-analysis&quot;&gt;[More]&lt;/a&gt;
    &lt;ol&gt;
      &lt;li&gt;If we expect any &lt;a href=&quot;https://mdickens.me/2020/11/23/uncorrelated_investing/#factor-investing&quot;&gt;factors&lt;/a&gt; to outperform the market, the optimal portfolio will include substantial allocations to these factors, even if we only expect them to outperform by a thin margin. But reducing expected factor performance reduces the magnitude of the short market position.&lt;/li&gt;
      &lt;li&gt;The optimal portfolios described above assumed that typical investors both overly concentrate in certain asset classes (e.g., US stocks) and neglect the highest-return opportunities. If most altruistic dollars are concentrated in the highest-return asset(s) and use appropriate leverage, then marginal investors should invest in diversifying assets while shorting the overweighted assets (even though those assets have the highest expected return).&lt;/li&gt;
      &lt;li&gt;Similarly, if most altruists diversify well but don’t take on enough risk, then marginal altruists should concentrate in high-return assets.&lt;/li&gt;
      &lt;li&gt;If we account for the costs of leverage and short positions, then the optimal allocation uses a little bit less leverage and shorting, but not much less. &lt;a href=&quot;#leverage-and-shorts-are-free&quot;&gt;[More]&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;For large investors, maximizing the geometric mean of the total altruistic portfolio looks more similar to simply maximizing the geometric mean of the investments they control. &lt;a href=&quot;#we-control-only-a-small-portion-of-the-total-altruistic-portfolio&quot;&gt;[More]&lt;/a&gt;&lt;/li&gt;
    &lt;/ol&gt;
  &lt;/li&gt;
  &lt;li&gt;For other assumptions, if we alter them, we can no longer derive the geometric mean maximizing portfolio via the same method. But we can consider how the change qualitatively affects the optimal asset allocation:
    &lt;ol&gt;
      &lt;li&gt;Most assets experience deep drawdowns more frequently than theoretical models predict. Therefore, in practice, the optimal asset allocation probably takes on less risk than the models suggest. &lt;a href=&quot;#assets-follow-log-normal-distributions&quot;&gt;[More]&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;Most assets’ correlations tend to increase during bad times. Therefore, diversification does not reduce risk by as much as theoretical models indicate. However, some assets’ correlations do not tend to rise during equity market drawdowns, so these still have strong diversification benefits. &lt;a href=&quot;#asset-correlations-are-stable-over-time&quot;&gt;[More]&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;The output of an optimization model heavily depends on the provided return and risk estimates. Given that these estimates are uncertain, it makes sense to diversify more than theoretical models suggest. &lt;a href=&quot;#optimization-techniques-can-predict-optimal-future-asset-allocations&quot;&gt;[More]&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;Maximizing geometric mean is only optimal for actors with logarithmic utility. Altruists might be more or less risk-averse than this, in which case they should diversify more or less (respectively). &lt;a href=&quot;#altruists-have-logarithmic-utility&quot;&gt;[More]&lt;/a&gt;&lt;/li&gt;
    &lt;/ol&gt;
  &lt;/li&gt;
  &lt;li&gt;This approach of geometric mean maximization has substantial limitations and uncertainties, so we should not draw overly strong conclusions. That said, this analysis provides at least &lt;em&gt;moderate&lt;/em&gt; evidence that:
    &lt;ul&gt;
      &lt;li&gt;Altruistic investors who cannot use leverage should invest in whatever asset has highest expected (geometric) return, as long as they are comfortable with the risk.&lt;/li&gt;
      &lt;li&gt;Altruistic investors who can use leverage and shorts should invest in a somewhat diversified portfolio of assets with high expected return, along with a modest short market position to reduce their correlation to other altruistic investors.&lt;/li&gt;
      &lt;li&gt;Increasing return is moderately more important than decreasing correlation, but not overwhelmingly so.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;h1 id=&quot;introduction&quot;&gt;Introduction&lt;/h1&gt;

&lt;p&gt;In some ways, altruists and traditional investors have the same incentives regarding how they should invest—they want to invest in the portfolio with the best balance of return and risk. But self-interested people only care about how much money they have, while altruists care about how much money all other (value-aligned) altruists have.&lt;/p&gt;

&lt;p&gt;That means altruists should not invest their money in the optimal overall portfolio. Instead, they should attempt to push the pool of value-aligned philanthropic money in the direction of optimal.&lt;/p&gt;

&lt;p&gt;I have discussed this fact in the past, especially regarding how it affects &lt;a href=&quot;https://mdickens.me/2020/01/06/how_much_leverage_should_altruists_use/&quot;&gt;optimal leverage&lt;/a&gt; and the importance of &lt;a href=&quot;https://mdickens.me/2020/11/23/uncorrelated_investing/&quot;&gt;uncorrelated investments&lt;/a&gt;. Namely: most altruistic investors probably don’t use enough leverage, and don’t diversify well enough. So at the margin, altruists should pay extra attention to leverage and diversification, more so than they would if they only cared about their own investments.&lt;/p&gt;

&lt;p&gt;In theory, altruists should invest in assets with zero correlation to the typical altruistic portfolio, and then apply an absurdly high amount of leverage (like 100:1 or more). In practice, we can’t do this. If we want to decrease correlation by short-selling popular investments, we also reduce expected return, and we can’t fully make up for it by adding leverage. So we must make tradeoffs between increasing expected return and increasing diversification. How do we decide?&lt;/p&gt;

&lt;p&gt;In this essay, I will attempt to precisely determine, under specific conditions, the optimal asset allocation for altruistic investors at the margin.&lt;/p&gt;

&lt;h1 id=&quot;problem-setup&quot;&gt;Problem setup&lt;/h1&gt;

&lt;h2 id=&quot;the-optimization-problem&quot;&gt;The optimization problem&lt;/h2&gt;

&lt;p&gt;At a basic level, the problem we are facing is not unique to altruists: we want to construct the best portfolio from among a set of available assets. Traditionally, we could (in theory) solve this problem using &lt;a href=&quot;http://www.columbia.edu/~mh2078/FoundationsFE/MeanVariance-CAPM.pdf&quot;&gt;mean-variance optimization (MVO)&lt;/a&gt;.&lt;sup id=&quot;fnref:10:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:10&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt; If we know the expected return, standard deviation, and correlation of two or more asset classes, we can use MVO to determine how much to allocate to each of them. (In practice, MVO doesn’t actually work very well, but more on that later.)&lt;/p&gt;

&lt;p&gt;MVO can work in two ways:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Find the allocation that maximizes expected (arithmetic mean) return for a given level of volatility—or, &lt;a href=&quot;https://en.wikipedia.org/wiki/Duality_(optimization)&quot;&gt;equivalently&lt;/a&gt;, the allocation that minimizes volatility for a given expected return.&lt;/li&gt;
  &lt;li&gt;Find the allocation with the highest risk-adjusted return (or &lt;a href=&quot;https://en.wikipedia.org/wiki/Sharpe_ratio&quot;&gt;Sharpe ratio&lt;/a&gt;).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Unfortunately, neither of these techniques can help us optimize the altruistic portfolio. We want to find the asset allocation among our own assets that maximizes the expected utility of the total altruistic portfolio. The first MVO method—maximizing expected return for a given volatility—only works if we already know what level of total volatility maximizes expected utility.&lt;/p&gt;

&lt;p&gt;The second method—maximizing the Sharpe ratio—works for independent investors, because (under certain assumptions) the maximum-Sharpe-ratio allocation also maximizes utility if you add cash or add leverage to adjust the level of risk. But this doesn’t work for altruistic investors because if we maximize the Sharpe ratio of our own holdings, that probably doesn’t maximize the Sharpe ratio of the overall altruistic allocation. And if we maximize the altruistic Sharpe ratio, we can’t mix in cash or leverage on the whole portfolio, only on our own money, so knowing the maximum-Sharpe portfolio doesn’t tell us how to maximize utility.&lt;/p&gt;

&lt;p&gt;We can solve the problem by starting with a reasonable assumption: altruists have logarithmic utility of money. This is a common assumption, and it might not be exactly true, but it seems at least approximately correct. With logarithmic utility, we want to find the asset allocation that maximizes the &lt;a href=&quot;https://en.wikipedia.org/wiki/Geometric_mean&quot;&gt;geometric mean&lt;/a&gt;. (For interested readers, this is equivalent to the &lt;a href=&quot;https://en.wikipedia.org/wiki/Kelly_criterion&quot;&gt;Kelly criterion&lt;/a&gt;, but over long-term investments rather than discrete bets.&lt;sup id=&quot;fnref:11&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:11&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt;)&lt;/p&gt;

&lt;p&gt;We could also find the optimal asset allocation if we used a different utility function, but logarithmic utility is simple and intuitively plausible, so it’s a good place to start.&lt;/p&gt;

&lt;p&gt;Even if we want to take on more risk with our altruistic investments than with our personal investments, we still can’t take on arbitrarily high risk. In practice, we face constraints—typically either a maximum allowed quantity of leverage, or a maximum acceptable volatility. Subject to constraints on our own investments, we want to maximize the geometric mean return of the total altruistic portfolio.&lt;/p&gt;

&lt;p&gt;There is no good formula for the exact geometric return of a portfolio, but we can use a close approximation (&lt;a href=&quot;https://web.iese.edu/jestrada/PDF/Research/Refereed/GMM-Extended.pdf&quot;&gt;Estrada, 2010&lt;/a&gt;&lt;sup id=&quot;fnref:6&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:6&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt;&lt;sup id=&quot;fnref:19&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:19&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt;):&lt;/p&gt;

\[\text{Geometric Mean} = \log(1 + \mu) - \displaystyle\frac{\sigma^2}{2(1 + \mu)^2}\]

&lt;p&gt;where \(\mu\) is the portfolio arithmetic mean and \(\sigma\) is the portfolio standard deviation, both of which we can precisely calculate.&lt;/p&gt;

&lt;p&gt;This is a convex optimization problem, which we can solve with an optimization tool such as Python’s &lt;a href=&quot;https://docs.scipy.org/doc/scipy/reference/optimize.html&quot;&gt;scipy.optimize&lt;/a&gt;.&lt;/p&gt;
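&lt;p&gt;As a quick sketch of how this can be set up (with illustrative toy numbers, not the estimates used in this essay), we can write the approximation as a function and hand its negation to a numerical optimizer:&lt;/p&gt;

```python
import numpy as np
from scipy.optimize import minimize

def geo_mean(mu, sigma):
    # Estrada's approximation to the geometric mean return, given the
    # portfolio arithmetic mean mu and standard deviation sigma.
    return np.log(1 + mu) - sigma**2 / (2 * (1 + mu)**2)

# Toy example: two assets with arithmetic means and a covariance matrix.
means = np.array([0.05, 0.04])
cov = np.array([[0.04, 0.01],
                [0.01, 0.02]])

def neg_geo_mean(w):
    # Negated so that scipy's minimizer maximizes the geometric mean.
    mu = w @ means
    sigma = np.sqrt(w @ cov @ w)
    return -geo_mean(mu, sigma)

# Long-only, fully-invested weights.
res = minimize(neg_geo_mean, x0=[0.5, 0.5],
               bounds=[(0, 1), (0, 1)],
               constraints=[{'type': 'eq', 'fun': lambda w: w.sum() - 1}])
print(res.x)
```

&lt;p&gt;The constrained cases considered below work the same way, just with different bounds and constraints.&lt;/p&gt;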

&lt;h2 id=&quot;assumptions&quot;&gt;Assumptions&lt;/h2&gt;

&lt;p&gt;In addition to the setup given in the previous section, let’s make some more assumptions to clarify and simplify the problem:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;We care about real return (that is, return after inflation), not nominal.&lt;/li&gt;
  &lt;li&gt;Asset returns follow a &lt;a href=&quot;https://en.wikipedia.org/wiki/Log-normal_distribution&quot;&gt;log-normal distribution&lt;/a&gt;.&lt;/li&gt;
  &lt;li&gt;The real risk-free rate is 0%. (This is approximately true, and it makes the math easier.)&lt;/li&gt;
  &lt;li&gt;We control only a small portion of the total altruistic portfolio. For the calculations in this essay, I assume we control 1%.&lt;/li&gt;
  &lt;li&gt;If we use leverage, we only have to pay the risk-free rate in interest. We can short-sell securities for free.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Later, I will &lt;a href=&quot;#impact-of-assumptions&quot;&gt;scrutinize these&lt;/a&gt; to see how they might affect the outcome.&lt;/p&gt;

&lt;h2 id=&quot;sample-assets&quot;&gt;Sample assets&lt;/h2&gt;

&lt;p&gt;For this essay, we will mainly consider four investable assets. The first represents the broad market, and the next three represent what I believe to be some of the best investment opportunities that are accessible to most investors. In a previous essay, &lt;a href=&quot;https://mdickens.me/2020/11/23/uncorrelated_investing/&quot;&gt;Uncorrelated Investments for Altruists&lt;/a&gt;, I explain why I like these three investments.&lt;/p&gt;

&lt;p&gt;The four assets are:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;&lt;strong&gt;Market&lt;/strong&gt;: The global equities market. This serves as a proxy for the average altruist’s investment portfolio.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Val/Mom&lt;/strong&gt;: A concentrated long-only value and momentum strategy, similar to the strategies used by various ETFs.&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;5&lt;/a&gt;&lt;/sup&gt;&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;VMOT&lt;/strong&gt;: A value/momentum strategy with a trendfollowing component, similar to the strategies used by the ETFs &lt;a href=&quot;https://www.etf.com/VMOT&quot;&gt;VMOT&lt;/a&gt;&lt;sup id=&quot;fnref:16&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:16&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;6&lt;/a&gt;&lt;/sup&gt; and &lt;a href=&quot;https://www.etf.com/GMOM&quot;&gt;GMOM&lt;/a&gt;.&lt;sup id=&quot;fnref:7&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:7&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;7&lt;/a&gt;&lt;/sup&gt;&lt;sup id=&quot;fnref:16:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:16&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;6&lt;/a&gt;&lt;/sup&gt; (VMOT stands for &lt;strong&gt;V&lt;/strong&gt;alue &lt;strong&gt;Mo&lt;/strong&gt;mentum &lt;strong&gt;T&lt;/strong&gt;rend.)&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;ManFut&lt;/strong&gt;: A trendfollowing managed futures strategy, like the strategy studied in &lt;a href=&quot;https://www.aqr.com/Insights/Research/Journal-Article/Time-Series-Momentum&quot;&gt;Moskowitz et al. (2012)&lt;/a&gt;&lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;8&lt;/a&gt;&lt;/sup&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To find the optimal allocation, we need to make some assumptions about these assets’ (geometric) means, standard deviations, and correlations to each other. I use the following numbers:&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt;Market&lt;/th&gt;
      &lt;th&gt;Val/Mom&lt;/th&gt;
      &lt;th&gt;VMOT&lt;/th&gt;
      &lt;th&gt;ManFut&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Geometric Mean&lt;/td&gt;
      &lt;td&gt;3%&lt;/td&gt;
      &lt;td&gt;6%&lt;/td&gt;
      &lt;td&gt;6%&lt;/td&gt;
      &lt;td&gt;3%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Standard Deviation&lt;/td&gt;
      &lt;td&gt;16%&lt;/td&gt;
      &lt;td&gt;16%&lt;/td&gt;
      &lt;td&gt;13%&lt;/td&gt;
      &lt;td&gt;15%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;And the correlation matrix:&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt;Market&lt;/th&gt;
      &lt;th&gt;Val/Mom&lt;/th&gt;
      &lt;th&gt;VMOT&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Val/Mom&lt;/td&gt;
      &lt;td&gt;0.8&lt;/td&gt;
      &lt;td&gt; &lt;/td&gt;
      &lt;td&gt; &lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;VMOT&lt;/td&gt;
      &lt;td&gt;0.5&lt;/td&gt;
      &lt;td&gt;0.8&lt;/td&gt;
      &lt;td&gt; &lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;ManFut&lt;/td&gt;
      &lt;td&gt;0&lt;/td&gt;
      &lt;td&gt;0&lt;/td&gt;
      &lt;td&gt;0.2&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;See &lt;a href=&quot;#appendix-a-derivation-of-meanstandard-deviationcorrelation-estimates&quot;&gt;Appendix A&lt;/a&gt; for details on how I came up with these numbers.&lt;/p&gt;

&lt;p&gt;In a later section, I will look at results &lt;a href=&quot;#using-more-traditional-assets&quot;&gt;using more traditional assets&lt;/a&gt; (stocks, bonds, and commodities), but my analysis will primarily focus on the set of four assets listed above. The specific choice of assets doesn’t actually matter much—we can draw general lessons that apply to any assets.&lt;/p&gt;

&lt;h1 id=&quot;optimal-asset-allocation&quot;&gt;Optimal asset allocation&lt;/h1&gt;

&lt;p&gt;With no constraints, the optimization problem is simple:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Find the allocation that the altruistic portfolio as a whole should have.&lt;/li&gt;
  &lt;li&gt;Take on gigantic leveraged and short positions to the point where you singlehandedly push the altruistic portfolio to the optimum point.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;But you can’t do this because it requires using unrealistically large amounts of leverage. In practice, investors—even altruistic investors with high risk tolerance—have constraints. Let’s look at two types of constraints: leverage and risk.&lt;/p&gt;

&lt;p&gt;I will begin by looking at the problem of portfolio optimization with leverage constraints. Then I will derive optimal portfolios under risk constraints, including some variations on the problem. Risk constraints allow for more variations and deeper analysis, so I will spend more time on them.&lt;/p&gt;

&lt;h2 id=&quot;with-a-leverage-constraint&quot;&gt;With a leverage constraint&lt;/h2&gt;

&lt;p&gt;Most investors do not use leveraged or short positions, either because they cannot or because they prefer to stick with simpler investments. If we assume that we cannot use leverage or shorts, what is the optimal asset allocation? More generally, what is the optimal allocation where we can only use up to some maximum amount of leverage?&lt;/p&gt;

&lt;p&gt;Let’s begin with the case of an investor who cannot use any leverage or shorts.&lt;sup id=&quot;fnref:12&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:12&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;9&lt;/a&gt;&lt;/sup&gt; Under the assumptions described previously, an investor maximizes the geometric mean of the total altruistic portfolio by investing 100% of their wealth in VMOT. Without the ability to use leverage, altruistic investors should (according to this model) simply invest in whichever asset has the highest return and not too much correlation to the market.&lt;/p&gt;

&lt;p&gt;An investor who can use up to 2:1 leverage (e.g., anyone using &lt;a href=&quot;https://en.wikipedia.org/wiki/Regulation_T&quot;&gt;Reg T margin&lt;/a&gt;) has essentially the same optimal asset allocation, but with more leverage. Namely, they should invest 200% in VMOT.&lt;/p&gt;

&lt;p&gt;This differs substantially from the optimal self-interested portfolio. An investor who simply wishes to maximize their own geometric return should invest 51% in Val/Mom and 49% in VMOT. (Notice that this portfolio does not allocate any to the two lower-return assets, Market and ManFut. Maximizing geometric return tends to result in fairly aggressive portfolios. A more risk-averse investor would want to diversify more.)&lt;/p&gt;

&lt;p&gt;Does the altruistic investor prefer VMOT because it is tied with Val/Mom for the highest return while having lower risk? Or because VMOT has a lower correlation to the market? We can test this by supposing that VMOT has a perfect correlation to the market. In that case, the optimal portfolio allocates 100% to Val/Mom instead, so the optimizer cares more about reducing correlation than about reducing volatility. If we revise the expected return of VMOT to 6.5% (along with a perfect correlation to Market), the optimal allocation now puts half into Val/Mom and half into ManFut. Roughly speaking, that means increasing expected return by 0.5 percentage points is as good as reducing correlation by 0.2.&lt;/p&gt;

&lt;h3 id=&quot;leverage-constraint-but-with-shorts-allowed&quot;&gt;Leverage constraint, but with shorts allowed&lt;/h3&gt;

&lt;p&gt;If we maintain the leverage constraint but allow short selling, we get some unusual and perhaps surprising results:&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Allocation&lt;/th&gt;
      &lt;th&gt; &lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Market&lt;/td&gt;
      &lt;td&gt;–44360%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Val/Mom&lt;/td&gt;
      &lt;td&gt;45521%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;VMOT&lt;/td&gt;
      &lt;td&gt;–1854%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;ManFut&lt;/td&gt;
      &lt;td&gt;793%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Summary Statistics&lt;/th&gt;
      &lt;th&gt; &lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Total Altruistic Return&lt;/td&gt;
      &lt;td&gt;9.64%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Personal Return&lt;/td&gt;
      &lt;td&gt;–221.69%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Personal Standard Deviation&lt;/td&gt;
      &lt;td&gt;4435.67%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Recall that our portfolio represents 1% of the overall altruistic portfolio, so a –9900% short market position would be enough to bring the altruistic market allocation down to 0%. This allocation goes far beyond that point, down to a –44360% short position.&lt;/p&gt;

&lt;p&gt;This allocation has an extremely bad personal geometric mean return of –222% (per the approximation formula; in reality, a portfolio this levered would simply be wiped out). This sacrifice is made in the service of diversifying the total altruistic portfolio.&lt;/p&gt;

&lt;p&gt;But this seems slightly unrealistic. Let’s posit that the short side cannot be larger than 100% (that is, your short position cannot exceed the net value of your account). An investor who can short can probably also use leverage, so let’s allow up to 2:1 leverage with a 50% short margin requirement (i.e., if you short $1, you can use up to $0.50 of that to buy more assets). These constraints produce the following allocation:&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Allocation&lt;/th&gt;
      &lt;th&gt; &lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Market&lt;/td&gt;
      &lt;td&gt;–100%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Val/Mom&lt;/td&gt;
      &lt;td&gt;0%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;VMOT&lt;/td&gt;
      &lt;td&gt;250%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;ManFut&lt;/td&gt;
      &lt;td&gt;0%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Summary Statistics&lt;/th&gt;
      &lt;th&gt; &lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Total Altruistic Return&lt;/td&gt;
      &lt;td&gt;3.12%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Personal Return&lt;/td&gt;
      &lt;td&gt;8.96%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Personal Standard Deviation&lt;/td&gt;
      &lt;td&gt;28.15%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;So when we allow modest short positions, the optimizer will short the market and then buy as much VMOT as possible.&lt;/p&gt;

&lt;h2 id=&quot;with-a-risk-constraint&quot;&gt;With a risk constraint&lt;/h2&gt;

&lt;p&gt;Most investors, including purely altruistic investors, want to limit their exposure to risk. Possible reasons for this include:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Overly risky leveraged portfolios might get &lt;a href=&quot;https://www.investopedia.com/terms/m/margincall.asp&quot;&gt;margin called&lt;/a&gt;. This will never happen to a continuously-rebalanced portfolio, but in real life it’s not possible to rebalance continuously. It’s typical to only rebalance once a month, so we want a portfolio that’s exceedingly unlikely to experience a 100% drawdown within a month.&lt;/li&gt;
  &lt;li&gt;Even if extremely high risk is optimal in theory, most investors aren’t comfortable with that much volatility.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We can formally limit risk by mandating that our portfolio does not exceed some maximum (predicted) standard deviation. Standard deviation does not perfectly capture what we mean by “risk” (more on that &lt;a href=&quot;#assets-follow-log-normal-distributions&quot;&gt;later&lt;/a&gt;), but it’s a reasonable proxy that’s easy to work with.&lt;/p&gt;

&lt;p&gt;The US stock market historically has had about a 20% standard deviation,&lt;sup id=&quot;fnref:4&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:4&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;10&lt;/a&gt;&lt;/sup&gt; so let’s begin by taking this as our maximum acceptable volatility. Altruists might want to take on &lt;a href=&quot;https://mdickens.me/2020/01/06/how_much_leverage_should_altruists_use/&quot;&gt;more risk than that&lt;/a&gt;, but we need to pick a number, and 20% seems like a good starting point.&lt;/p&gt;

&lt;p&gt;Our goal: build the asset allocation that maximizes the geometric mean of the overall altruistic portfolio while keeping the volatility of our personal portfolio below 20%. We can use leverage and shorts as much as we want, as long as we do not exceed a 20% standard deviation. Within these constraints, the optimal portfolio is:&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Allocation&lt;/th&gt;
      &lt;th&gt; &lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Market&lt;/td&gt;
      &lt;td&gt;–76%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Val/Mom&lt;/td&gt;
      &lt;td&gt;92%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;VMOT&lt;/td&gt;
      &lt;td&gt;77%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;ManFut&lt;/td&gt;
      &lt;td&gt;57%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Summary Statistics&lt;/th&gt;
      &lt;th&gt; &lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Total Altruistic Return&lt;/td&gt;
      &lt;td&gt;3.10%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Personal Return&lt;/td&gt;
      &lt;td&gt;8.87%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Personal Correlation to Market&lt;/td&gt;
      &lt;td&gt;0.23&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;(“Return” refers to geometric return.)&lt;/p&gt;

&lt;p&gt;A negative allocation indicates that the asset is sold short. The allocation adds up to more than 100%, which means we must use leverage. Also notice that this allocation has low correlation to the market, but it’s not totally uncorrelated.&lt;/p&gt;
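&lt;p&gt;For readers who want to reproduce this, here is a sketch of the risk-constrained setup, using the asset assumptions from the tables above. Because it uses the rough mu = g + sigma^2 / 2 geometric-to-arithmetic conversion, the weights will only approximately match the table:&lt;/p&gt;

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

# Assets: Market, Val/Mom, VMOT, ManFut.
g = np.array([0.03, 0.06, 0.06, 0.03])
sd = np.array([0.16, 0.16, 0.13, 0.15])
corr = np.array([[1.0, 0.8, 0.5, 0.0],
                 [0.8, 1.0, 0.8, 0.0],
                 [0.5, 0.8, 1.0, 0.2],
                 [0.0, 0.0, 0.2, 1.0]])
mu = g + sd**2 / 2              # rough geometric-to-arithmetic conversion
cov = corr * np.outer(sd, sd)

OUR_SHARE = 0.01                         # we control 1% of altruistic money
others = np.array([1.0, 0.0, 0.0, 0.0])  # other altruists hold the market
MAX_VOL = 0.20                           # personal volatility cap

def neg_total_geo_mean(w):
    # Approximate geometric mean of the total altruistic portfolio, negated.
    total = (1 - OUR_SHARE) * others + OUR_SHARE * w
    m = total @ mu
    s2 = total @ cov @ total
    return -(np.log(1 + m) - s2 / (2 * (1 + m)**2))

def personal_vol(w):
    return np.sqrt(w @ cov @ w)

# Shorts and leverage are unrestricted; only personal volatility is capped.
res = minimize(neg_total_geo_mean, x0=np.full(4, 0.25),
               constraints=[NonlinearConstraint(personal_vol, 0.0, MAX_VOL)])
print(np.round(res.x, 2))
```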

&lt;p&gt;How does this allocation compare to the optimal self-interested asset allocation, where we don’t care about the performance of the altruistic portfolio?&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Allocation&lt;/th&gt;
      &lt;th&gt; &lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Market&lt;/td&gt;
      &lt;td&gt;–36%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Val/Mom&lt;/td&gt;
      &lt;td&gt;82%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;VMOT&lt;/td&gt;
      &lt;td&gt;71%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;ManFut&lt;/td&gt;
      &lt;td&gt;52%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Summary Statistics&lt;/th&gt;
      &lt;th&gt; &lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Total Altruistic Return&lt;/td&gt;
      &lt;td&gt;N/A&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Personal Return&lt;/td&gt;
      &lt;td&gt;9.22%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Personal Correlation to Market&lt;/td&gt;
      &lt;td&gt;0.47&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;We can see that, according to this model, altruists are willing to sacrifice some return (from 9.22% down to 8.87%) in order to diversify more. The altruistic asset allocation has a much larger short market position, and somewhat larger long positions for the other assets.&lt;/p&gt;

&lt;p&gt;But the altruistic allocation does not take as large a short position as it could. Qualitatively speaking, the algorithm does not seem willing to sacrifice much expected return to get more diversification. If we optimized with the goal of maximizing the Sharpe ratio of the total altruistic portfolio instead of the geometric return, we would see a much larger short market position (–109%, to be precise).&lt;/p&gt;

&lt;p&gt;If we increase our risk tolerance, the portfolio composition largely does not change, but simply scales up linearly. For example, if we allow up to 40% standard deviation instead of 20%, the allocation to each asset approximately doubles (including doubling the size of the short market position). This is true as long as our portfolio only represents a tiny fraction of the total altruistic portfolio. However, for particularly large investors, increasing the maximum standard deviation will substantially affect portfolio allocation, because the large investor’s allocation will meaningfully influence the overall altruistic allocation.&lt;/p&gt;

&lt;h3 id=&quot;sensitivity-analysis&quot;&gt;Sensitivity analysis&lt;/h3&gt;

&lt;p&gt;The optimal allocation depends on our expectations for asset returns, standard deviations, and correlations. In this section, I will provide some qualitative sensitivity analysis, with full numerical details provided in &lt;a href=&quot;#appendix-b-sensitivity-analysis-of-risk-constrained-allocation&quot;&gt;Appendix B&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I.&lt;/p&gt;

&lt;p&gt;Changing the expected future return of the value, momentum, and trend factors does not much change the optimal allocation. If we assume each of these factors will only beat the market by one percentage point, the resulting optimal allocation looks much the same as the one above.&lt;/p&gt;

&lt;p&gt;Even if we reduce expected factor performance to 0%, the optimal allocation still allocates substantially to Val/Mom and ManFut. The main difference in this case is that, rather than shorting the market, the optimal allocation includes a small (20%) long market position.&lt;/p&gt;

&lt;p&gt;II.&lt;/p&gt;

&lt;p&gt;How much of the asset allocation is driven by attempting to improve return, and how much by trying to diversify? The answer is obscured by our model’s assumption that most altruistic investors concentrate in a single asset (global equities), &lt;em&gt;and&lt;/em&gt; that that asset has the lowest risk-adjusted return. So moving away from that asset could make sense either as a way to diversify or to increase return.&lt;/p&gt;

&lt;p&gt;We can answer this by assuming other altruists invest all their money in VMOT—the asset with the highest risk-adjusted return (according to the model assumptions)—and use optimal leverage. In this case, our optimal allocation shorts both the market and VMOT, while holding large long positions in Val/Mom and ManFut.&lt;sup id=&quot;fnref:24&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:24&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;11&lt;/a&gt;&lt;/sup&gt; Apparently, if the altruistic portfolio uses optimal leverage but doesn’t diversify well, then marginal investors should prioritize increasing diversification. But if the typical altruistic portfolio is both under-levered and overly concentrated, then investors should prioritize increasing return.&lt;/p&gt;

&lt;p&gt;III.&lt;/p&gt;

&lt;p&gt;In the estimates I made for return/standard deviation/correlation, I believe most of the numbers are well-supported. But I’m much more uncertain about what to expect from managed futures. The century-long backtest in &lt;a href=&quot;https://www.trendfollowing.com/whitepaper/Century_Evidence_Trend_Following.pdf&quot;&gt;Hurst et al. (2014)&lt;/a&gt;&lt;sup id=&quot;fnref:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;12&lt;/a&gt;&lt;/sup&gt; found an 11.2% nominal return after estimated fees and transaction costs, with a 9.7% standard deviation. If we levered this up to 15% volatility, that would give a historical return of about 15%, or 11% after inflation. Performance this strong seems unlikely to continue into the future, for various reasons.&lt;sup id=&quot;fnref:20&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:20&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;13&lt;/a&gt;&lt;/sup&gt; In my return assumptions, I substantially discounted this backtested real return of roughly 11% down to 3%.&lt;/p&gt;

&lt;p&gt;This might be overly conservative.&lt;sup id=&quot;fnref:5&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:5&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;14&lt;/a&gt;&lt;/sup&gt; What if we make more optimistic assumptions about managed futures performance? Naturally, if we expect better return from managed futures, then we should give it a larger allocation. But as we increase the expected return of managed futures, its allocation only changes slowly. We need to expect a return as high as 10% before the optimizer allocates more to ManFut than to Val/Mom and VMOT combined. So it seems likely that the “true” optimal allocation includes more value/momentum stocks than managed futures.&lt;/p&gt;

&lt;h3 id=&quot;using-more-traditional-assets&quot;&gt;Using more traditional assets&lt;/h3&gt;

&lt;p&gt;What result do we get if we use a more traditional set of assets? Let’s include US equities, international developed-market equities, commodities, and intermediate-term Treasury bonds.&lt;sup id=&quot;fnref:8&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:8&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;15&lt;/a&gt;&lt;/sup&gt; Assume the overall altruistic portfolio allocates 60% to US equities, 20% to international equities, and 20% to bonds.&lt;sup id=&quot;fnref:9&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:9&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;16&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt;US&lt;/th&gt;
      &lt;th&gt;International&lt;/th&gt;
      &lt;th&gt;Commodities&lt;/th&gt;
      &lt;th&gt;Bonds&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Geometric Mean&lt;/td&gt;
      &lt;td&gt;0%&lt;/td&gt;
      &lt;td&gt;5%&lt;/td&gt;
      &lt;td&gt;1%&lt;/td&gt;
      &lt;td&gt;–1%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Standard Deviation&lt;/td&gt;
      &lt;td&gt;16%&lt;/td&gt;
      &lt;td&gt;17%&lt;/td&gt;
      &lt;td&gt;16%&lt;/td&gt;
      &lt;td&gt;4%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Correlation matrix:&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt;US&lt;/th&gt;
      &lt;th&gt;International&lt;/th&gt;
      &lt;th&gt;Commodities&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;International&lt;/td&gt;
      &lt;td&gt;0.9&lt;/td&gt;
      &lt;td&gt; &lt;/td&gt;
      &lt;td&gt; &lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Commodities&lt;/td&gt;
      &lt;td&gt;0.3&lt;/td&gt;
      &lt;td&gt;0.4&lt;/td&gt;
      &lt;td&gt; &lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Bonds&lt;/td&gt;
      &lt;td&gt;–0.3&lt;/td&gt;
      &lt;td&gt;–0.3&lt;/td&gt;
      &lt;td&gt;–0.1&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;The optimal allocation, subject to a 20% maximum standard deviation:&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Allocation&lt;/th&gt;
      &lt;th&gt; &lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;US Market&lt;/td&gt;
      &lt;td&gt;–250%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;International&lt;/td&gt;
      &lt;td&gt;276%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Commodities&lt;/td&gt;
      &lt;td&gt;–11%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Bonds&lt;/td&gt;
      &lt;td&gt;85%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Summary Statistics&lt;/th&gt;
      &lt;th&gt; &lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Total Altruistic Return&lt;/td&gt;
      &lt;td&gt;1.23%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Personal Return&lt;/td&gt;
      &lt;td&gt;11.14%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Personal Correlation to Altruistic Portfolio&lt;/td&gt;
      &lt;td&gt;0.16&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;When optimizing personal holdings only, with no regard to the external altruistic portfolio:&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Allocation&lt;/th&gt;
      &lt;th&gt; &lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;US Market&lt;/td&gt;
      &lt;td&gt;–228%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;International&lt;/td&gt;
      &lt;td&gt;272%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Commodities&lt;/td&gt;
      &lt;td&gt;–11%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Bonds&lt;/td&gt;
      &lt;td&gt;68%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Summary Statistics&lt;/th&gt;
      &lt;th&gt; &lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Total Altruistic Return&lt;/td&gt;
      &lt;td&gt;N/A&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Personal Return&lt;/td&gt;
      &lt;td&gt;11.31%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Personal Correlation to Altruistic Portfolio&lt;/td&gt;
      &lt;td&gt;0.30&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;As before, when we account for the investments of altruists other than ourselves, we prefer a bigger short market position and a larger position in each of the long assets. But the two portfolios don’t look much different. This suggests that how much we care about other altruists’ investments depends on what assumptions we make about investment performance.&lt;/p&gt;

&lt;p&gt;Compared to the allocation for the earlier set of assets, this more traditional allocation includes a much larger short position. The optimizer wants to allocate most of the money to international equities. But those are pretty highly correlated to US equities, so the optimizer seeks greater uncorrelated return by using more leverage and a bigger short position.&lt;/p&gt;

&lt;h2 id=&quot;the-returncorrelation-tradeoff&quot;&gt;The return/correlation tradeoff&lt;/h2&gt;

&lt;p&gt;We want to maximize return while minimizing correlation to the typical altruistic portfolio. We are willing to give up some return in order to decrease correlation, or accept higher correlation to earn greater return. How exactly do we trade off between these?&lt;/p&gt;

&lt;p&gt;As an example, let’s say (as before) that global equities have a 3% expected (geometric) return and 16% standard deviation. Suppose we can invest in equities, or in some other asset with the same volatility but a different return and correlation. What combination of return and correlation would make the other asset as good an investment as global equities?&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/return-correlation-tradeoff-market.png&quot; /&gt;&lt;/p&gt;

&lt;p&gt;On the rightmost point of this line, we see that if an asset is perfectly correlated to global equities, then we require it to have the same return. (In effect, this point represents the equity market itself.) At the other extreme, if an asset is perfectly anti-correlated with equities, it can have a return as low as –1.75% and still be worth investing in. In the middle, an asset with zero correlation must earn an expected return of 0.62%.&lt;/p&gt;

&lt;p&gt;This line&lt;sup id=&quot;fnref:21&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:21&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;17&lt;/a&gt;&lt;/sup&gt; has a slope of 2.38. That means if we can increase expected return by (say) 0.238 percentage points in exchange for increasing correlation by less than 0.1, we should do it, and vice versa.&lt;/p&gt;
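&lt;p&gt;A sketch of how this indifference line can be computed: find the geometric return an alternative asset (same 16% volatility, correlation rho to the market) must have so that moving our 1% into it leaves the total altruistic geometric mean unchanged. The exact numbers depend on how geometric means are converted to arithmetic means, so this won’t exactly reproduce the chart, but it shows the shape of the tradeoff:&lt;/p&gt;

```python
import numpy as np
from scipy.optimize import brentq

SHARE = 0.01            # our slice of the altruistic portfolio
G_MKT, SD = 0.03, 0.16  # market geometric mean and volatility

def geo(mu, s2):
    # Approximate geometric mean from arithmetic mean mu and variance s2.
    return np.log(1 + mu) - s2 / (2 * (1 + mu)**2)

def arith(g_target, sigma):
    # Invert the geometric-mean approximation to get an arithmetic mean.
    return brentq(lambda m: geo(m, sigma**2) - g_target, -0.9, 1.0)

MU_MKT = arith(G_MKT, SD)
BASELINE = geo(MU_MKT, SD**2)  # total portfolio: 100% market

def required_geo_return(rho):
    # Total portfolio: 99% market, 1% alternative asset with volatility SD
    # and correlation rho to the market. Solve for the alternative's
    # return that leaves the total geometric mean unchanged.
    def gap(mu_alt):
        m = (1 - SHARE) * MU_MKT + SHARE * mu_alt
        s2 = ((1 - SHARE)**2 + SHARE**2
              + 2 * (1 - SHARE) * SHARE * rho) * SD**2
        return geo(m, s2) - BASELINE
    mu_alt = brentq(gap, -0.9, 1.0)
    return geo(mu_alt, SD**2)

for rho in (-1.0, 0.0, 1.0):
    print(rho, round(required_geo_return(rho), 4))
```

&lt;p&gt;A perfectly correlated asset must match the market’s 3% return, and the required return falls as correlation falls.&lt;/p&gt;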

&lt;p&gt;As another example, suppose we can invest in leveraged VMOT&lt;sup id=&quot;fnref:22&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:22&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;18&lt;/a&gt;&lt;/sup&gt;, which has the same volatility as global equities, an expected return of 7%, and a correlation of 0.5. Holding volatility fixed, what return/correlation would another asset need to have to match the expected value of VMOT?&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/return-correlation-tradeoff-vmot.png&quot; /&gt;&lt;/p&gt;

&lt;p&gt;For an asset with perfect correlation to global equities, we would demand a return of 8.19%. For an uncorrelated asset, we would accept a return of 5.81%. And for an asset with a perfect negative correlation, the expected return can go as low as 3.44%.&lt;/p&gt;

&lt;h1 id=&quot;impact-of-assumptions&quot;&gt;Impact of assumptions&lt;/h1&gt;

&lt;p&gt;The analysis in this essay made a number of assumptions. How realistic are they? If we change them, how might the results change?&lt;/p&gt;

&lt;h2 id=&quot;leverage-and-shorts-are-free&quot;&gt;Leverage and shorts are free&lt;/h2&gt;

&lt;p&gt;So far, we have assumed that we can get leverage at the risk-free rate and shorts for free. But this is usually not true in practice. Let’s take the more realistic assumptions that leverage costs 1% on top of the risk-free rate, and short positions cost 0.25%.&lt;sup id=&quot;fnref:13&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:13&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;19&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;
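&lt;p&gt;As a minimal sketch of how these costs enter the optimization, the function below charges 1% (on top of an illustrative 0% risk-free rate) on capital borrowed beyond 100% gross long exposure, and 0.25% on short exposure. The function and its inputs are illustrative, not the essay’s actual optimizer:&lt;/p&gt;

```python
# Minimal sketch: net expected return of a leveraged long/short portfolio
# after financing costs. Cost figures are the ones assumed in the text
# (leverage costs 1% over the risk-free rate, shorts cost 0.25%); the 0%
# risk-free rate is an illustrative assumption.

LEVERAGE_COST = 0.01   # annual cost of borrowed money above risk-free
SHORT_COST = 0.0025    # annual cost of maintaining a short position

def net_return(weights, expected_returns, risk_free=0.0):
    '''weights and expected_returns are parallel lists; negative weights
    are short positions. Returns gross return minus financing costs.'''
    gross = sum(w * r for w, r in zip(weights, expected_returns))
    long_exposure = sum(max(w, 0.0) for w in weights)
    short_exposure = sum(max(-w, 0.0) for w in weights)
    borrowed = max(long_exposure - 1.0, 0.0)  # leverage beyond own capital
    costs = (borrowed * (LEVERAGE_COST + risk_free)
             + short_exposure * SHORT_COST)
    return gross - costs
```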

&lt;p&gt;This gives the following asset allocation:&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Allocation&lt;/th&gt;
      &lt;th&gt; &lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Market&lt;/td&gt;
      &lt;td&gt;–38%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Val/Mom&lt;/td&gt;
      &lt;td&gt;39%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;VMOT&lt;/td&gt;
      &lt;td&gt;115%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;ManFut&lt;/td&gt;
      &lt;td&gt;50%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Summary Statistics&lt;/th&gt;
      &lt;th&gt; &lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Total Altruistic Return&lt;/td&gt;
      &lt;td&gt;3.08%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Personal Return&lt;/td&gt;
      &lt;td&gt;7.92%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Personal Standard Deviation&lt;/td&gt;
      &lt;td&gt;20%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Personal Correlation to Market&lt;/td&gt;
      &lt;td&gt;0.32&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Notice that the short market position is smaller than &lt;a href=&quot;#with-a-risk-constraint&quot;&gt;before&lt;/a&gt;: –38% instead of –76%. This portfolio also has a smaller long position (204% instead of 226%). Interestingly, it actually increases the allocation to VMOT. This happens because Market has a stronger correlation to Val/Mom than to VMOT: once the optimizer reduces the size of the short position, the short no longer cancels out Val/Mom’s high correlation, so the optimizer shifts some money to the asset with lower correlation.&lt;/p&gt;

&lt;p&gt;For comparison, the optimal self-interested asset allocation looks like this:&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Allocation&lt;/th&gt;
      &lt;th&gt; &lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Market&lt;/td&gt;
      &lt;td&gt;0%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Val/Mom&lt;/td&gt;
      &lt;td&gt;42%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;VMOT&lt;/td&gt;
      &lt;td&gt;95%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;ManFut&lt;/td&gt;
      &lt;td&gt;42%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Summary Statistics&lt;/th&gt;
      &lt;th&gt; &lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Total Altruistic Return&lt;/td&gt;
      &lt;td&gt;N/A&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Personal Return&lt;/td&gt;
      &lt;td&gt;8.33%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Personal Standard Deviation&lt;/td&gt;
      &lt;td&gt;20%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Personal Correlation to Market&lt;/td&gt;
      &lt;td&gt;0.58&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;This allocation eschews short positions entirely.&lt;/p&gt;

&lt;h2 id=&quot;we-control-only-a-small-portion-of-the-total-altruistic-portfolio&quot;&gt;We control only a small portion of the total altruistic portfolio&lt;/h2&gt;

&lt;p&gt;So far we have assumed that, in total, other value-aligned altruists donate much more money than we do. (Specifically, my calculations assumed we control 1% of the overall portfolio, although the exact number doesn’t matter much.)&lt;/p&gt;

&lt;p&gt;If we control a large fraction of the altruistic portfolio—e.g., if we manage a large foundation, or if we donate to a cause that receives almost no attention—then our portion of the altruistic portfolio should look much more similar to the optimal overall altruistic portfolio. Specifically, that means:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;We should care less about reducing correlation to other altruistic investors.&lt;/li&gt;
  &lt;li&gt;We might not want to get as much leverage as possible, instead getting an amount that’s closer to the overall optimum level.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2 id=&quot;assets-follow-log-normal-distributions&quot;&gt;Assets follow log-normal distributions&lt;/h2&gt;

&lt;p&gt;I’m using standard deviation as the measure of risk. That only works if asset prices follow &lt;a href=&quot;https://en.wikipedia.org/wiki/Log-normal_distribution&quot;&gt;log-normal distributions&lt;/a&gt;. In practice, most assets have a left-skewed distribution, meaning they see very bad performance more often than a log-normal distribution would predict.&lt;/p&gt;

&lt;p&gt;Therefore, standard deviation does not properly capture the risk of most assets. You should usually use less leverage than you’d think just from looking at the standard deviation. (In &lt;a href=&quot;https://mdickens.me/2020/06/21/samuelson_share_predict_optimal_leverage/&quot;&gt;a previous essay&lt;/a&gt;, I compared theoretically optimal leverage (over log-normally distributed assets) to the (simulated) historical performance of leveraged portfolios. Over the asset classes I examined, theoretically optimal leverage somewhat overestimated truly optimal leverage, but not by a huge margin.)&lt;/p&gt;

&lt;p&gt;Usually, people don’t actually care about the standard deviation of a portfolio: they care about drawdowns. I like the &lt;a href=&quot;http://www.tangotools.com/ui/ui.htm&quot;&gt;ulcer index&lt;/a&gt; as a way to measure a portfolio’s tendency to experience drawdowns. Unfortunately, we can’t easily run an ulcer index minimization algorithm, because that would require full price data for each asset, not just returns, volatilities, and correlations. And even with full price data, the results would be highly sensitive to which particular drawdowns occurred during the sample period.&lt;/p&gt;
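&lt;p&gt;For concreteness, the ulcer index is the root-mean-square of percentage drawdowns from the running peak. A minimal sketch over a price series:&lt;/p&gt;

```python
# Sketch of the ulcer index: the root-mean-square of percentage
# drawdowns from the running peak, over a series of prices.

def ulcer_index(prices):
    peak = prices[0]
    squared_dd = []
    for price in prices:
        peak = max(peak, price)
        drawdown_pct = 100.0 * (1.0 - price / peak)
        squared_dd.append(drawdown_pct ** 2)
    return (sum(squared_dd) / len(squared_dd)) ** 0.5
```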

&lt;p&gt;We discussed optimal portfolios under leverage constraints and risk (i.e., standard deviation) constraints. But perhaps a better approach is to use a &lt;em&gt;drawdown constraint&lt;/em&gt;. Suppose I can accept at most, say, a 90% drawdown on my altruistic investments. What maximum drawdowns do I expect from each asset, and how likely are the different assets to experience big drawdowns at the same time? I can then use the answers to derive an optimal portfolio. This is approximately how I decide on an asset allocation in my personal account (although I certainly have a much lower drawdown tolerance than 90% for my personal funds!). I’m not sure how to formalize this process, so I just make ad hoc estimates.&lt;/p&gt;

&lt;p&gt;It should be noted that some assets do not skew left. Importantly, managed futures have historically been right-skewed. For example, &lt;a href=&quot;http://www.chesapeakecapital.com/performance/&quot;&gt;Chesapeake Capital’s managed futures fund&lt;/a&gt;&lt;sup id=&quot;fnref:16:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:16&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;6&lt;/a&gt;&lt;/sup&gt; experienced a standard deviation of 19% from 1988–2019, as compared to the US stock market’s 14%. But the Chesapeake fund performed substantially better on other measures of risk:&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt;Chesapeake&lt;/th&gt;
      &lt;th&gt;US Equities&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Skewness&lt;/td&gt;
      &lt;td&gt;0.86&lt;/td&gt;
      &lt;td&gt;–0.67&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Max Drawdown&lt;/td&gt;
      &lt;td&gt;32%&lt;/td&gt;
      &lt;td&gt;50%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;a href=&quot;http://www.tangotools.com/ui/ui.htm&quot;&gt;Ulcer Index&lt;/a&gt;&lt;/td&gt;
      &lt;td&gt;10.5&lt;/td&gt;
      &lt;td&gt;14.2&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;(Notably, Chesapeake experienced its maximum drawdown not in 2007–2009 like US equities, but in 2010–2013, while the market was going up.)&lt;/p&gt;

&lt;p&gt;If we expect this to continue in the future, then standard deviation is actually too conservative a measure of risk for assets like this one.&lt;/p&gt;
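&lt;p&gt;The two non-standard risk measures in the table above can be computed from a return series and a price series, respectively. A sketch (not the exact code used to produce the table):&lt;/p&gt;

```python
# Sketch of the two other risk measures in the table above: sample
# skewness of periodic returns, and maximum drawdown of the price path.

def skewness(returns):
    n = len(returns)
    mean = sum(returns) / n
    m2 = sum((r - mean) ** 2 for r in returns) / n
    m3 = sum((r - mean) ** 3 for r in returns) / n
    return m3 / m2 ** 1.5

def max_drawdown(prices):
    peak = prices[0]
    worst = 0.0
    for price in prices:
        peak = max(peak, price)
        worst = max(worst, 1.0 - price / peak)
    return worst   # e.g. 0.32 means a 32% peak-to-trough loss
```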

&lt;h2 id=&quot;asset-correlations-are-stable-over-time&quot;&gt;Asset correlations are stable over time&lt;/h2&gt;

&lt;p&gt;In ordinary times, stocks, commodities, and real estate have fairly low correlations to each other. But in bad times, correlations tend to go to 1. During the recession of 2008, all three asset classes experienced large drawdowns at the same time. So even if we build a diversified portfolio, that diversification might fail us at exactly the wrong time.&lt;/p&gt;
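&lt;p&gt;One way to test whether diversification fails in bad times is to compare an asset’s full-sample correlation to equities against its correlation during only the worst equity months. A rough sketch, where the decile cutoff is an arbitrary illustrative choice:&lt;/p&gt;

```python
# Illustrative check of correlation conditional on bad equity months:
# compute correlation over the full sample, then again over only the
# worst-decile months for equities.

def correlation(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

def crisis_correlation(equity_returns, other_returns, decile=0.1):
    pairs = sorted(zip(equity_returns, other_returns))  # worst months first
    n_worst = max(int(len(pairs) * decile), 2)
    worst = pairs[:n_worst]
    return correlation([e for e, o in worst], [o for e, o in worst])
```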

&lt;p&gt;Some assets and strategies, most notably trendfollowing (including managed futures), have actually shown positive return during equity drawdowns.&lt;sup id=&quot;fnref:14&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:14&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;20&lt;/a&gt;&lt;/sup&gt; Including such strategies can help alleviate concerns about correlations increasing during market downturns.&lt;/p&gt;

&lt;h2 id=&quot;optimization-techniques-can-predict-optimal-future-asset-allocations&quot;&gt;Optimization techniques can predict optimal future asset allocations&lt;/h2&gt;

&lt;p&gt;This essay implicitly assumed that we can use techniques such as geometric mean maximization to derive optimal asset allocations. But substantial literature has shown that the related technique of mean-variance optimization (MVO) tends to over-fit to past results and does a bad job of estimating forward-looking optimal portfolios—in fact, it maximizes estimation error (&lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2387669&quot;&gt;Michaud, 1989&lt;/a&gt;&lt;sup id=&quot;fnref:18&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:18&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;21&lt;/a&gt;&lt;/sup&gt;). Geometric mean maximization does a better job than MVO of predicting out-of-sample performance (&lt;a href=&quot;https://web.iese.edu/jestrada/PDF/Research/Refereed/GMM-Extended.pdf&quot;&gt;Estrada, 2010&lt;/a&gt;&lt;sup id=&quot;fnref:6:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:6&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt;), but still isn’t entirely reliable.&lt;/p&gt;

&lt;p&gt;In my analysis, I used future return projections rather than naively taking historical returns, which should fix the problem of over-fitting to historical performance. But this still requires my estimates to be (reasonably) accurate, which they might not be.&lt;/p&gt;

&lt;p&gt;Some more complex methods, such as Bayesian methods and portfolio resampling, can incorporate uncertainty into the portfolio optimization process (&lt;a href=&quot;https://www.oth-aw.de/files/oth-aw/Aktuelles/Veroeffentlichungen/WEN-Diskussionspapier/WEN-DPs-PDF/DP68.pdf&quot;&gt;Fuhrer and Hock, 2019&lt;/a&gt;&lt;sup id=&quot;fnref:15&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:15&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;22&lt;/a&gt;&lt;/sup&gt;). These methods generally favor more diversified portfolios: if you don’t know which assets will perform best in the future, it makes sense to hedge your bets.&lt;sup id=&quot;fnref:23&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:23&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;23&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;
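&lt;p&gt;As a toy illustration of resampling, the sketch below perturbs two assets’ expected-return estimates with noise representing estimation error, re-optimizes an approximate geometric mean (arithmetic mean minus half the variance) for each draw by grid search, and averages the resulting weights. All inputs are illustrative, not the estimates used in this essay:&lt;/p&gt;

```python
import random

# Minimal sketch of portfolio resampling for two assets: perturb the
# expected-return estimates, pick the best weight for each draw by grid
# search on an approximate geometric mean (mu minus half the variance),
# then average the chosen weights across draws.

random.seed(0)

def best_weight(mu_a, mu_b, sigma_a, sigma_b, corr):
    '''Grid-search the weight on asset A (the rest goes in B) that
    maximizes the approximate geometric mean of the portfolio.'''
    candidates = [i / 20.0 for i in range(21)]
    def geo_mean(w):
        mu = w * mu_a + (1 - w) * mu_b
        var = ((w * sigma_a) ** 2 + ((1 - w) * sigma_b) ** 2
               + 2 * w * (1 - w) * corr * sigma_a * sigma_b)
        return mu - 0.5 * var
    return max(candidates, key=geo_mean)

def resampled_weight(mu_a, mu_b, sigma_a, sigma_b, corr, noise, draws=500):
    weights = [best_weight(mu_a + random.gauss(0, noise),
                           mu_b + random.gauss(0, noise),
                           sigma_a, sigma_b, corr)
               for i in range(draws)]
    return sum(weights) / len(weights)
```

&lt;p&gt;With two identical assets, each noisy draw tends to pick an extreme weight, but the resampled average lands near an even split, illustrating why these methods favor diversification.&lt;/p&gt;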

&lt;p&gt;Ultimately, the takeaway from this essay should not be that a particular allocation shown is close to optimal. We should focus on the general principles rather than specific numbers—for example, the principle that altruistic investors should probably short the market, but their short position should be smaller than their long positions.&lt;/p&gt;

&lt;h2 id=&quot;altruists-have-logarithmic-utility&quot;&gt;Altruists have logarithmic utility&lt;/h2&gt;

&lt;p&gt;Investors with logarithmic utility functions want to maximize geometric mean. So by focusing on the geometric mean, I have implicitly assumed that altruists have logarithmic utility. This seems at least approximately true, but it might not be true exactly. If altruists experience more rapidly diminishing marginal utility, then they want to diversify more than this essay suggests, and vice versa.&lt;/p&gt;

&lt;p&gt;(Note that self-interested individuals are &lt;a href=&quot;https://www.aacalc.com/docs/relative_risk_aversion&quot;&gt;probably more risk averse&lt;/a&gt; than logarithmic utility would suggest. It seems plausible that altruistic endeavors on the whole are less risk-averse than individuals.)&lt;/p&gt;
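&lt;p&gt;To make the role of the utility function concrete, here is a sketch comparing certainty equivalents under CRRA utility, where gamma = 1 corresponds to logarithmic utility and higher gamma means more rapidly diminishing marginal utility (and hence a stronger preference for diversification). The gamble is illustrative:&lt;/p&gt;

```python
import math

# Sketch: certainty equivalent of an equal-probability gamble under
# CRRA utility. gamma = 1 is log utility; higher gamma accepts a lower
# certain amount in exchange for avoiding risk.

def crra_utility(wealth, gamma):
    if gamma == 1:
        return math.log(wealth)
    return (wealth ** (1 - gamma) - 1) / (1 - gamma)

def certainty_equivalent(outcomes, gamma):
    eu = sum(crra_utility(w, gamma) for w in outcomes) / len(outcomes)
    if gamma == 1:
        return math.exp(eu)
    return (eu * (1 - gamma) + 1) ** (1 / (1 - gamma))
```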

&lt;h1 id=&quot;conclusions&quot;&gt;Conclusions&lt;/h1&gt;

&lt;p&gt;In brief:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;If most altruists don’t use enough leverage, or under-allocate to the assets with highest expected return, then the top priority is to get a (leveraged) allocation to the high-return assets.&lt;/li&gt;
  &lt;li&gt;The optimal allocation probably includes a modest short position to reduce correlation to the typical altruistic portfolio.&lt;/li&gt;
  &lt;li&gt;If altruists do use enough leverage but don’t fully diversify, then altruists on the margin should invest in diversifying assets, perhaps exclusively.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In practice, it seems most likely that altruists do under-allocate to &lt;a href=&quot;https://mdickens.me/2020/11/23/uncorrelated_investing/#factor-investing&quot;&gt;the value, momentum, and trend factors&lt;/a&gt;, as well as some high expected return markets like &lt;a href=&quot;http://mdickens.me/2020/11/23/uncorrelated_investing/#can-we-find-uncorrelated-return-within-equities&quot;&gt;emerging-country equities&lt;/a&gt;. This suggests that, on the margin, altruists should hold whichever investments they believe have the highest expected return, while shorting over-subscribed markets such as US equities.&lt;/p&gt;

&lt;p&gt;How has writing this essay changed my mind?&lt;/p&gt;

&lt;p&gt;Previously, I was moderately convinced by the theoretical argument (given &lt;a href=&quot;https://mdickens.me/2020/01/06/how_much_leverage_should_altruists_use/#uncorrelated-small-donors-are-nearly-risk-neutral&quot;&gt;here&lt;/a&gt;) that altruistic investors should exclusively hold uncorrelated investments. I wasn’t sure this was true in practice, because I suspected that the overhead costs of maintaining market-neutral long/short positions would exceed the benefits.&lt;/p&gt;

&lt;p&gt;The analysis in this essay somewhat justifies my practical concerns, although I was right for the wrong reason. Yes, the costs of leveraged and short positions do make such positions somewhat less appealing, but not by much. More importantly, if you believe that most altruistic investors use insufficient leverage or under-weight the highest-return assets, then—at least according to the analysis in this essay—you should primarily invest in the assets with the highest expected (geometric) return, even if they’re correlated with typical altruistic investments. In particular, investors who can’t use leverage may simply want to invest in whichever asset has the highest expected return, without regard to its volatility or correlation.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Calculations for this essay were done using &lt;a href=&quot;https://github.com/michaeldickens/public-scripts/blob/master/mvo.py&quot;&gt;mvo.py&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;h1 id=&quot;appendix&quot;&gt;Appendix&lt;/h1&gt;

&lt;h2 id=&quot;appendix-a-derivation-of-meanstandard-deviationcorrelation-estimates&quot;&gt;Appendix A: Derivation of mean/standard deviation/correlation estimates&lt;/h2&gt;

&lt;p&gt;Mean, standard deviation, and correlation for global equities, US equities, international equities, commodities, and bonds are based on forward projections from Research Affiliates’ &lt;a href=&quot;https://interactive.researchaffiliates.com/asset-allocation/&quot;&gt;Asset Allocation Interactive&lt;/a&gt;. Accessed 2020-11-27.&lt;/p&gt;

&lt;p&gt;Means for Val/Mom, VMOT, and ManFut were derived by taking historical return in excess of the market, adding expected market return, subtracting projected costs, and dividing the premium in half based on the assumption that value, momentum, and trendfollowing will work less well in the future than they have in the past.&lt;/p&gt;

&lt;p&gt;Baseline historical returns for Val/Mom and VMOT are taken from a &lt;a href=&quot;https://alphaarchitect.com/2017/06/06/the-value-momentum-trend-philosophy/&quot;&gt;20-year backtest by Alpha Architect&lt;/a&gt; and my own 90-year backtest using the &lt;a href=&quot;https://mba.tuck.dartmouth.edu/pages/faculty/ken.french/data_library.html&quot;&gt;Ken French Data Library&lt;/a&gt;. Baseline historical returns for ManFut are taken from &lt;a href=&quot;https://www.aqr.com/Insights/Research/Journal-Article/Time-Series-Momentum&quot;&gt;Moskowitz et al. (2012)&lt;/a&gt;&lt;sup id=&quot;fnref:2:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;8&lt;/a&gt;&lt;/sup&gt; and &lt;a href=&quot;https://www.trendfollowing.com/whitepaper/Century_Evidence_Trend_Following.pdf&quot;&gt;Hurst et al. (2014)&lt;/a&gt;&lt;sup id=&quot;fnref:3:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;12&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;Standard deviations for Val/Mom and VMOT are taken from the same backtests with no adjustments, on the basis that we have no reason to expect future volatility to be higher or lower than it was in the past.&lt;/p&gt;

&lt;p&gt;Standard deviation for ManFut comes from the fact that high-volatility managed futures mutual funds generally target about a 15% standard deviation.&lt;/p&gt;

&lt;p&gt;The correlation matrix was calculated with a backtest using the Ken French Data Library and AQR’s &lt;a href=&quot;https://www.aqr.com/Insights/Datasets/Time-Series-Momentum-Factors-Monthly&quot;&gt;Time Series Momentum: Factors, Monthly&lt;/a&gt; data set, on the assumption that future correlations will resemble historical correlations. I revised some of the correlations slightly downward because my backtest only included US stocks, whereas an actual portfolio would include globally diversified stocks. Managed futures correlations were validated using the &lt;a href=&quot;http://www.chesapeakecapital.com/performance/&quot;&gt;actual historical performance&lt;/a&gt; of the Chesapeake Capital managed futures fund, 1988–2019.&lt;/p&gt;

&lt;h2 id=&quot;appendix-b-sensitivity-analysis-of-risk-constrained-allocation&quot;&gt;Appendix B: Sensitivity analysis of risk-constrained allocation&lt;/h2&gt;

&lt;h3 id=&quot;variations-on-market--valmom--vmot--manfut&quot;&gt;Variations on Market + Val/Mom + VMOT + ManFut&lt;/h3&gt;

&lt;p&gt;Asset allocation with maximum 40% standard deviation, but return expectations unchanged&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Allocation&lt;/th&gt;
      &lt;th&gt; &lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Market&lt;/td&gt;
      &lt;td&gt;–145%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Val/Mom&lt;/td&gt;
      &lt;td&gt;171%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;VMOT&lt;/td&gt;
      &lt;td&gt;164%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;ManFut&lt;/td&gt;
      &lt;td&gt;114%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Summary Statistics&lt;/th&gt;
      &lt;th&gt; &lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Total Altruistic Return&lt;/td&gt;
      &lt;td&gt;3.20%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Personal Return&lt;/td&gt;
      &lt;td&gt;14.65%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Personal Correlation to Market&lt;/td&gt;
      &lt;td&gt;0.23&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Asset allocation with 1% excess return for each factor (Val/Mom return = 4%, VMOT return = 3%, ManFut return = 1%)&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Allocation&lt;/th&gt;
      &lt;th&gt; &lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Market&lt;/td&gt;
      &lt;td&gt;–62%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Val/Mom&lt;/td&gt;
      &lt;td&gt;141%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;VMOT&lt;/td&gt;
      &lt;td&gt;11%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;ManFut&lt;/td&gt;
      &lt;td&gt;68%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Summary Statistics&lt;/th&gt;
      &lt;th&gt; &lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Total Altruistic Return&lt;/td&gt;
      &lt;td&gt;3.05%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Personal Return&lt;/td&gt;
      &lt;td&gt;4.70%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Personal Correlation to Market&lt;/td&gt;
      &lt;td&gt;0.44&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Asset allocation with 0% excess return for each factor (Val/Mom return = 3%, VMOT return = 2%, ManFut return = 0%)&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Allocation&lt;/th&gt;
      &lt;th&gt; &lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Market&lt;/td&gt;
      &lt;td&gt;20%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Val/Mom&lt;/td&gt;
      &lt;td&gt;62%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;VMOT&lt;/td&gt;
      &lt;td&gt;50%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;ManFut&lt;/td&gt;
      &lt;td&gt;50%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Summary Statistics&lt;/th&gt;
      &lt;th&gt; &lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Total Altruistic Return&lt;/td&gt;
      &lt;td&gt;3.03%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Personal Return&lt;/td&gt;
      &lt;td&gt;3.54%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Personal Correlation to Market&lt;/td&gt;
      &lt;td&gt;0.72&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Asset allocation when other altruists invest 100% of their money into VMOT&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Allocation&lt;/th&gt;
      &lt;th&gt; &lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Market&lt;/td&gt;
      &lt;td&gt;–45%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Val/Mom&lt;/td&gt;
      &lt;td&gt;102%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;VMOT&lt;/td&gt;
      &lt;td&gt;47%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;ManFut&lt;/td&gt;
      &lt;td&gt;64%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Summary Statistics&lt;/th&gt;
      &lt;th&gt; &lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Total Altruistic Return&lt;/td&gt;
      &lt;td&gt;5.92%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Personal Return&lt;/td&gt;
      &lt;td&gt;9.16%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Personal Correlation to VMOT&lt;/td&gt;
      &lt;td&gt;0.45&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Asset allocation when other altruists invest all their money into VMOT, with optimal leverage (405%), and allowing us to use up to 52% standard deviation (so that we can match 405% leverage)&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Allocation&lt;/th&gt;
      &lt;th&gt; &lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Market&lt;/td&gt;
      &lt;td&gt;–189%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Val/Mom&lt;/td&gt;
      &lt;td&gt;430%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;VMOT&lt;/td&gt;
      &lt;td&gt;–121%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;ManFut&lt;/td&gt;
      &lt;td&gt;270%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Summary Statistics&lt;/th&gt;
      &lt;th&gt; &lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Total Altruistic Return&lt;/td&gt;
      &lt;td&gt;16.02%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Personal Return&lt;/td&gt;
      &lt;td&gt;14.70%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Personal Standard Deviation&lt;/td&gt;
      &lt;td&gt;52%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Personal Correlation to VMOT&lt;/td&gt;
      &lt;td&gt;0.62&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Same, but only allowing us a 20% standard deviation&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Allocation&lt;/th&gt;
      &lt;th&gt; &lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Market&lt;/td&gt;
      &lt;td&gt;–73%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Val/Mom&lt;/td&gt;
      &lt;td&gt;166%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;VMOT&lt;/td&gt;
      &lt;td&gt;–47%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;ManFut&lt;/td&gt;
      &lt;td&gt;104%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Summary Statistics&lt;/th&gt;
      &lt;th&gt; &lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Total Altruistic Return&lt;/td&gt;
      &lt;td&gt;15.94%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Personal Return&lt;/td&gt;
      &lt;td&gt;7.91%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Personal Standard Deviation&lt;/td&gt;
      &lt;td&gt;20.00%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Personal Correlation to VMOT&lt;/td&gt;
      &lt;td&gt;0.62&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Asset allocation with more optimistic expectations for managed futures (ManFut return = 5%)&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Allocation&lt;/th&gt;
      &lt;th&gt; &lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Market&lt;/td&gt;
      &lt;td&gt;–75%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Val/Mom&lt;/td&gt;
      &lt;td&gt;104%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;VMOT&lt;/td&gt;
      &lt;td&gt;41%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;ManFut&lt;/td&gt;
      &lt;td&gt;85%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Summary Statistics&lt;/th&gt;
      &lt;th&gt; &lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Total Altruistic Return&lt;/td&gt;
      &lt;td&gt;3.11%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Personal Return&lt;/td&gt;
      &lt;td&gt;10.12%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Personal Correlation to Market&lt;/td&gt;
      &lt;td&gt;0.20&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Asset allocation with highly optimistic expectations for managed futures (ManFut return = 10%)&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Allocation&lt;/th&gt;
      &lt;th&gt; &lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Market&lt;/td&gt;
      &lt;td&gt;–66%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Val/Mom&lt;/td&gt;
      &lt;td&gt;112%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;VMOT&lt;/td&gt;
      &lt;td&gt;–16%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;ManFut&lt;/td&gt;
      &lt;td&gt;119%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Summary Statistics&lt;/th&gt;
      &lt;th&gt; &lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Total Altruistic Return&lt;/td&gt;
      &lt;td&gt;3.16%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Personal Return&lt;/td&gt;
      &lt;td&gt;14.65%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Personal Correlation to Market&lt;/td&gt;
      &lt;td&gt;0.14&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Asset allocation using historical performance, net of estimated costs (market return = 5%, Val/Mom return = 12%, VMOT return = 12%, ManFut return = 9%, ManFut standard deviation = 10%)&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Allocation&lt;/th&gt;
      &lt;th&gt; &lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Market&lt;/td&gt;
      &lt;td&gt;–82%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Val/Mom&lt;/td&gt;
      &lt;td&gt;125%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;VMOT&lt;/td&gt;
      &lt;td&gt;13%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;ManFut&lt;/td&gt;
      &lt;td&gt;142%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Summary Statistics&lt;/th&gt;
      &lt;th&gt; &lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Total Altruistic Return&lt;/td&gt;
      &lt;td&gt;5.17%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Personal Return&lt;/td&gt;
      &lt;td&gt;22.40%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Personal Correlation to Market&lt;/td&gt;
      &lt;td&gt;0.19&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;h3 id=&quot;variations-on-us--international--commodities--bonds&quot;&gt;Variations on US + International + Commodities + Bonds&lt;/h3&gt;

&lt;p&gt;Asset allocation when US equities return 5%&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Allocation&lt;/th&gt;
      &lt;th&gt;Weight&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;US Market&lt;/td&gt;
      &lt;td&gt;82%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;International&lt;/td&gt;
      &lt;td&gt;37%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Commodities&lt;/td&gt;
      &lt;td&gt;11%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Bonds&lt;/td&gt;
      &lt;td&gt;–30%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Summary Statistics&lt;/th&gt;
      &lt;th&gt;Value&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Total Altruistic Return&lt;/td&gt;
      &lt;td&gt;4.07%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Personal Return&lt;/td&gt;
      &lt;td&gt;6.02%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Personal Correlation to Altruistic Portfolio&lt;/td&gt;
      &lt;td&gt;0.99&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Asset allocation when bonds return –3%&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Allocation&lt;/th&gt;
      &lt;th&gt;Weight&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;US Market&lt;/td&gt;
      &lt;td&gt;–225%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;International&lt;/td&gt;
      &lt;td&gt;263%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Commodities&lt;/td&gt;
      &lt;td&gt;5%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Bonds&lt;/td&gt;
      &lt;td&gt;58%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Summary Statistics&lt;/th&gt;
      &lt;th&gt;Value&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Total Altruistic Return&lt;/td&gt;
      &lt;td&gt;0.82%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Personal Return&lt;/td&gt;
      &lt;td&gt;10.16%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Personal Correlation to Altruistic Portfolio&lt;/td&gt;
      &lt;td&gt;0.30&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Asset allocation when bonds have zero correlation to every other asset&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Allocation&lt;/th&gt;
      &lt;th&gt;Weight&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;US Market&lt;/td&gt;
      &lt;td&gt;–247%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Global ex-US&lt;/td&gt;
      &lt;td&gt;269%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Commodities&lt;/td&gt;
      &lt;td&gt;–7%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Bonds&lt;/td&gt;
      &lt;td&gt;84%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Summary Statistics&lt;/th&gt;
      &lt;th&gt;Value&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Total Altruistic Return&lt;/td&gt;
      &lt;td&gt;1.19%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Personal Return&lt;/td&gt;
      &lt;td&gt;10.90%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Personal Correlation to Altruistic Portfolio&lt;/td&gt;
      &lt;td&gt;0.19&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;ol&gt;
  &lt;li&gt;The effective altruism movement, and charity in general, are over-represented in the United States.&lt;/li&gt;
  &lt;li&gt;Most investors over-weight investments within their own country.&lt;/li&gt;
  &lt;li&gt;Some altruistic investors probably hold something like a “standard” 60/40 stocks/bonds portfolio, and others probably hold stocks only. So maybe it averages out to around 80/20.&lt;/li&gt;
&lt;/ol&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:10&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Haugh (2016). &lt;a href=&quot;http://www.columbia.edu/~mh2078/FoundationsFE/MeanVariance-CAPM.pdf&quot;&gt;Mean-Variance Optimization and the CAPM.&lt;/a&gt; Lecture notes from IEOR E4706: Foundations of Financial Engineering. &lt;a href=&quot;#fnref:10&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:10:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:11&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;For more on this, see MacLean, Thorp, and Ziemba. &lt;a href=&quot;http://www.edwardothorp.com/books/kelly-capital-growth-investment-criterion/&quot;&gt;The Kelly Capital Growth Investment Criterion: Theory and Practice.&lt;/a&gt; &lt;a href=&quot;#fnref:11&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:6&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Estrada (2010). &lt;a href=&quot;https://web.iese.edu/jestrada/PDF/Research/Refereed/GMM-Extended.pdf&quot;&gt;Geometric Mean Maximization: An Overlooked Portfolio Approach?&lt;/a&gt; &lt;a href=&quot;#fnref:6&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:6:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:19&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;A similar approximation, with more discussion of accuracy, is given by Bernstein and Wilkinson (1997): &lt;a href=&quot;https://www.effisols.com/basics/rebal.pdf&quot;&gt;Diversification, Rebalancing, and the Geometric Mean Frontier.&lt;/a&gt; &lt;a href=&quot;#fnref:19&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;There are lots of value ETFs and a fair number of momentum ETFs, but most of them aren’t concentrated—they just replicate a market index while only weakly tilting toward value or momentum. Some examples of concentrated ETFs:&lt;/p&gt;

      &lt;ul&gt;
        &lt;li&gt;US value: &lt;a href=&quot;https://etf.com/QVAL&quot;&gt;QVAL&lt;/a&gt;, &lt;a href=&quot;https://etf.com/SYLD&quot;&gt;SYLD&lt;/a&gt;&lt;/li&gt;
        &lt;li&gt;International value: &lt;a href=&quot;https://etf.com/IVAL&quot;&gt;IVAL&lt;/a&gt;, &lt;a href=&quot;https://etf.com/FYLD&quot;&gt;FYLD&lt;/a&gt;, &lt;a href=&quot;https://etf.com/EYLD&quot;&gt;EYLD&lt;/a&gt;, &lt;a href=&quot;https://etf.com/GVAL&quot;&gt;GVAL&lt;/a&gt;&lt;/li&gt;
        &lt;li&gt;US momentum: &lt;a href=&quot;https://etf.com/QMOM&quot;&gt;QMOM&lt;/a&gt;&lt;/li&gt;
        &lt;li&gt;International momentum: &lt;a href=&quot;https://etf.com/IMOM&quot;&gt;IMOM&lt;/a&gt;&lt;/li&gt;
        &lt;li&gt;US combined value/momentum: &lt;a href=&quot;https://etf.com/VAMO&quot;&gt;VAMO&lt;/a&gt;&lt;/li&gt;
      &lt;/ul&gt;

      &lt;p&gt;Disclaimer: I invest in some of these ETFs.&lt;/p&gt;

      &lt;p&gt;It’s also fairly easy to build your own value/momentum stock portfolio if you have access to a good stock screener, although you may end up owing more taxes this way. &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:16&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Disclaimer: I have money invested in this fund. &lt;a href=&quot;#fnref:16&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:16:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt; &lt;a href=&quot;#fnref:16:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;3&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:7&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;The backtests I did for this essay attempted to replicate the &lt;a href=&quot;https://etfsite.alphaarchitect.com/wp-content/uploads/2018/05/VMOT_vF.pdf&quot;&gt;methodology of VMOT&lt;/a&gt;. GMOM uses a &lt;a href=&quot;https://cambriafunds.com/assets/docs/GMOM_Methodology.pdf&quot;&gt;different methodology&lt;/a&gt; and will probably perform substantially differently, but it’s spiritually similar. &lt;a href=&quot;#fnref:7&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:2&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Moskowitz, Ooi, and Pedersen (2012). &lt;a href=&quot;https://www.aqr.com/Insights/Research/Journal-Article/Time-Series-Momentum&quot;&gt;Time Series Momentum.&lt;/a&gt; Backtest data is &lt;a href=&quot;https://www.aqr.com/Insights/Datasets/Time-Series-Momentum-Factors-Monthly&quot;&gt;available from AQR&lt;/a&gt;. &lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:2:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:12&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Managed futures include embedded leverage and short positions, and VMOT includes shorts (although it’s possible to implement something like VMOT without shorting). But investors can still buy these assets even if they can’t use any leverage or shorts within their own portfolios. &lt;a href=&quot;#fnref:12&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:4&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Dimson, Marsh, and Staunton (2020). &lt;a href=&quot;https://www.credit-suisse.com/media/assets/corporate/docs/about-us/research/publications/credit-suisse-global-investment-returns-yearbook-2020-summary-edition.pdf&quot;&gt;Summary Edition Credit Suisse Global Investment Returns Yearbook 2020.&lt;/a&gt; &lt;a href=&quot;#fnref:4&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:24&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;In practice, it doesn’t make sense to short VMOT while buying Val/Mom, because they’re pretty similar in a way that’s not captured by their correlation. But the optimizer doesn’t know that. &lt;a href=&quot;#fnref:24&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:3&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Hurst, Ooi, and Pedersen (2014). &lt;a href=&quot;https://www.trendfollowing.com/whitepaper/Century_Evidence_Trend_Following.pdf&quot;&gt;A Century of Evidence on Trend-Following Investing.&lt;/a&gt; &lt;a href=&quot;#fnref:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:3:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:20&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;These reasons include:&lt;/p&gt;

      &lt;ol&gt;
        &lt;li&gt;Trendfollowing strategies are more crowded than they were for most of the 20th century, although probably not by enough to reduce the expected return to zero. See AQR (2018), &lt;a href=&quot;https://www.aqr.com/Insights/Research/White-Papers/Trend-Following-in-Focus&quot;&gt;Trend Following in Focus.&lt;/a&gt; Babu et al. (2019), &lt;a href=&quot;https://www.aqr.com/Insights/Research/Journal-Article/You-Cant-Always-Trend-When-You-Want&quot;&gt;You Can’t Always Trend When You Want&lt;/a&gt; showed that the poor recent performance of trendfollowing is almost entirely explained by the low absolute Sharpe ratios of markets, and is consistent with history. The paper actually found &lt;em&gt;no&lt;/em&gt; reduction in return due to over-saturation, although on priors I would expect at least a little bit of reduction, and this result might be due to luck.&lt;/li&gt;
        &lt;li&gt;The risk-free rate is lower than it used to be, which means trendfollowing strategies earn a much lower return on the cash they hold as collateral.&lt;/li&gt;
        &lt;li&gt;The paper’s methodology in calculating costs might not account for all trading costs. I compared the recent performance of AQR’s trendfollowing backtest against actual managed futures funds (including AQR’s own fund), and the funds performed a few percentage points worse.&lt;/li&gt;
        &lt;li&gt;It is reasonable to assume some degree of mean reversion in backtested results due to data mining, although the high statistical significance of trendfollowing suggests that we should not expect much mean reversion. See Harvey and Liu (2015), &lt;a href=&quot;https://faculty.fuqua.duke.edu/~charvey/Research/Published_Papers/P120_Backtesting.PDF&quot;&gt;Backtesting.&lt;/a&gt;&lt;/li&gt;
        &lt;li&gt;I additionally discount results based on a strong prior belief that any particular strategy cannot produce positive excess return. My subjective expectation for the return of a strategy is a weighted average of the prior expected return of zero and the (unweighted) expected return supported by the evidence. (It would be better to incorporate this Bayesian reasoning directly into the optimization model, as discussed in Fuhrer and Hock (2019)&lt;sup id=&quot;fnref:15:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:15&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;22&lt;/a&gt;&lt;/sup&gt;, but that’s more complicated.)&lt;/li&gt;
      &lt;/ol&gt;
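      &lt;p&gt;The discounting in item 5 can be sketched in a couple of lines of Python. The 50/50 weight and the 9% backtested return are hypothetical choices of mine for illustration; the essay does not specify how much weight it gives the evidence versus the zero-excess-return prior.&lt;/p&gt;

```python
# A minimal sketch of shrinking a backtested return toward a prior.
# The 50/50 evidence weight is a hypothetical choice, not one the essay endorses.
def shrunk_expected_return(backtested, prior=0.0, evidence_weight=0.5):
    """Weighted average of a prior expected return and a backtested estimate."""
    return evidence_weight * backtested + (1 - evidence_weight) * prior

# e.g. a strategy that backtests at 9% gets discounted to 4.5%
print(shrunk_expected_return(0.09))
```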
      &lt;p&gt;&lt;a href=&quot;#fnref:20&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:5&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Chesapeake Capital’s managed futures fund &lt;a href=&quot;http://www.chesapeakecapital.com/performance/&quot;&gt;returned&lt;/a&gt; 11% nominal (net) from 1988–2019, with a standard deviation of 19%. If we discount this based on inflation, the risk-free rate, and the assumption that managed futures will work somewhat less well in the future than they have in the past, then we get roughly a 3–5% real return. &lt;a href=&quot;#fnref:5&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:8&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Performance expectations are taken from the spreadsheet downloaded from Research Affiliates’ &lt;a href=&quot;https://interactive.researchaffiliates.com/asset-allocation/&quot;&gt;Asset Allocation Interactive&lt;/a&gt;. Accessed 2020-11-27. The names of the assets in my analysis correspond to Research Affiliates’ names as follows:&lt;/p&gt;

      &lt;ul&gt;
        &lt;li&gt;US = US Large&lt;/li&gt;
        &lt;li&gt;International = EAFE&lt;/li&gt;
        &lt;li&gt;Commodities = Commodities&lt;/li&gt;
        &lt;li&gt;Bonds = US Treasury Intermediate&lt;/li&gt;
      &lt;/ul&gt;

      &lt;p&gt;&lt;a href=&quot;#fnref:8&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:9&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;This asset allocation is an educated guess based on the considerations listed in the Notes section above. &lt;a href=&quot;#fnref:9&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:21&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;This is not actually a straight line, but it’s pretty close. We can see this by looking at the formula we use to approximate the geometric mean: the geometric mean over two assets is a linear function of their correlation, and an almost-but-not-quite-linear function of the expected return of either asset. &lt;a href=&quot;#fnref:21&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:22&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;We add leverage such that VMOT’s standard deviation matches the market’s. &lt;a href=&quot;#fnref:22&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:13&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Interactive Brokers &lt;a href=&quot;https://www.interactivebrokers.com/en/index.php?f=46376&quot;&gt;charges&lt;/a&gt; 1% over the risk-free rate on margin loans between $100,000 and $1 million, and lower rates above $1 million. Shorting widely-traded index funds (such as &lt;a href=&quot;https://etf.com/SPY&quot;&gt;SPY&lt;/a&gt;) generally costs 0.25%. Shorting more esoteric funds costs more, but the optimal portfolios in this model mostly just short the market, so we don’t need to worry about the costs of shorting the other assets. &lt;a href=&quot;#fnref:13&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:14&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;AQR (2018). &lt;a href=&quot;https://images.aqr.com/-/media/AQR/Documents/Alternative-Thinking/Alternative-thinking-3Q18.pdf&quot;&gt;It Was the Worst of Times: Diversification During a Century of Drawdowns.&lt;/a&gt; &lt;a href=&quot;#fnref:14&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:18&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Michaud (1989). &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2387669&quot;&gt;The Markowitz Optimization Enigma: Is ‘Optimized’ Optimal?&lt;/a&gt; &lt;a href=&quot;#fnref:18&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:15&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Fuhrer and Hock (2019). &lt;a href=&quot;https://www.oth-aw.de/files/oth-aw/Aktuelles/Veroeffentlichungen/WEN-Diskussionspapier/WEN-DPs-PDF/DP68.pdf&quot;&gt;Uncertainty in the Black-Litterman Model - A Practical Note.&lt;/a&gt; &lt;a href=&quot;#fnref:15&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:15:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:23&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I am not confident about this claim. I’ve read that it’s true, e.g., in Fuhrer and Hock (2019), cited above. But when I tried to replicate this result, I found that portfolio resampling did not change the optimal allocation. I might have been doing something wrong in my replication. &lt;a href=&quot;#fnref:23&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>Uncorrelated Investments for Altruists</title>
				<pubDate>Mon, 23 Nov 2020 00:00:00 -0800</pubDate>
				<link>http://mdickens.me/2020/11/23/uncorrelated_investing/</link>
				<guid isPermaLink="true">http://mdickens.me/2020/11/23/uncorrelated_investing/</guid>
                <description>
                  
                  
                  
                  &lt;h1 id=&quot;summary&quot;&gt;Summary&lt;/h1&gt;

&lt;ul&gt;
  &lt;li&gt;Altruists care a lot about finding investments with low correlation to other altruists’ portfolios. &lt;a href=&quot;#why-uncorrelated-investments&quot;&gt;[More]&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;Altruists can probably decrease correlation by investing in under-utilized asset classes, most notably (1) commodities and (2) long/short equity indexes. &lt;a href=&quot;#are-there-uncorrelated-asset-classes&quot;&gt;[More]&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;But there’s a better way to decrease correlation. Some factors—such as value, momentum, and trend—have been shown to predict investment performance. &lt;a href=&quot;#factor-investing&quot;&gt;[More]&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;We don’t know for sure if these factors will continue to work in the future, but evidence suggests that they will. &lt;a href=&quot;#will-these-factors-continue-to-work&quot;&gt;[More]&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;There are publicly-available funds that provide concentrated factor exposure. &lt;a href=&quot;#funds-i-like&quot;&gt;[More]&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;Factor investing is psychologically painful for most people. We can do a few things to mitigate this, but ultimately it seems unavoidable, and this will prevent most people from investing in value/momentum/trend in practice. &lt;a href=&quot;#behavioral-concerns&quot;&gt;[More]&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Disclaimer: This should not be taken as investment advice. This content is for informational purposes only. Any given portfolio results are hypothetical and do not represent returns achieved by an actual investor.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Cross-posted to the &lt;a href=&quot;https://forum.effectivealtruism.org/posts/zKFcC87iZXvk7yhbP/uncorrelated-investments-for-altruists&quot;&gt;Effective Altruism Forum.&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Last updated 2022-05-31.&lt;/em&gt;&lt;/p&gt;

&lt;!-- more --&gt;

&lt;h1 id=&quot;contents&quot;&gt;Contents&lt;/h1&gt;

&lt;ul id=&quot;markdown-toc&quot;&gt;
  &lt;li&gt;&lt;a href=&quot;#summary&quot; id=&quot;markdown-toc-summary&quot;&gt;Summary&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#contents&quot; id=&quot;markdown-toc-contents&quot;&gt;Contents&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#why-uncorrelated-investments&quot; id=&quot;markdown-toc-why-uncorrelated-investments&quot;&gt;Why uncorrelated investments?&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#how-valuable-are-uncorrelated-investments&quot; id=&quot;markdown-toc-how-valuable-are-uncorrelated-investments&quot;&gt;How valuable are uncorrelated investments?&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#are-there-uncorrelated-asset-classes&quot; id=&quot;markdown-toc-are-there-uncorrelated-asset-classes&quot;&gt;Are there uncorrelated asset classes?&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#real-estate&quot; id=&quot;markdown-toc-real-estate&quot;&gt;Real estate&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#commodities&quot; id=&quot;markdown-toc-commodities&quot;&gt;Commodities&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#startups&quot; id=&quot;markdown-toc-startups&quot;&gt;Startups&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#can-we-find-uncorrelated-return-within-equities&quot; id=&quot;markdown-toc-can-we-find-uncorrelated-return-within-equities&quot;&gt;Can we find uncorrelated return within equities?&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#factor-investing&quot; id=&quot;markdown-toc-factor-investing&quot;&gt;Factor investing&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#what-are-the-factors&quot; id=&quot;markdown-toc-what-are-the-factors&quot;&gt;What are the factors?&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#evidence-on-factor-investing&quot; id=&quot;markdown-toc-evidence-on-factor-investing&quot;&gt;Evidence on factor investing&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#why-have-these-factors-persisted&quot; id=&quot;markdown-toc-why-have-these-factors-persisted&quot;&gt;Why have these factors persisted?&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#will-these-factors-continue-to-work&quot; id=&quot;markdown-toc-will-these-factors-continue-to-work&quot;&gt;Will these factors continue to work?&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#factor-correlations&quot; id=&quot;markdown-toc-factor-correlations&quot;&gt;Factor correlations&lt;/a&gt;        &lt;ul&gt;
          &lt;li&gt;&lt;a href=&quot;#if-we-dont-short-individual-assets&quot; id=&quot;markdown-toc-if-we-dont-short-individual-assets&quot;&gt;If we don’t short individual assets&lt;/a&gt;&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#optimized-historical-portfolios&quot; id=&quot;markdown-toc-optimized-historical-portfolios&quot;&gt;Optimized historical portfolios&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#uncorrelated-investments-in-practice&quot; id=&quot;markdown-toc-uncorrelated-investments-in-practice&quot;&gt;Uncorrelated investments in practice&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#funds-i-like&quot; id=&quot;markdown-toc-funds-i-like&quot;&gt;Funds I like&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#behavioral-concerns&quot; id=&quot;markdown-toc-behavioral-concerns&quot;&gt;Behavioral concerns&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#appendix&quot; id=&quot;markdown-toc-appendix&quot;&gt;Appendix&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#appendix-a-factors-other-than-valuemomentumtrend&quot; id=&quot;markdown-toc-appendix-a-factors-other-than-valuemomentumtrend&quot;&gt;Appendix A: Factors other than value/momentum/trend&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#appendix-b-reducing-correlation-to-income&quot; id=&quot;markdown-toc-appendix-b-reducing-correlation-to-income&quot;&gt;Appendix B: Reducing correlation to income&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#appendix-c-changelog&quot; id=&quot;markdown-toc-appendix-c-changelog&quot;&gt;Appendix C: Changelog&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#notes&quot; id=&quot;markdown-toc-notes&quot;&gt;Notes&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h1 id=&quot;why-uncorrelated-investments&quot;&gt;Why uncorrelated investments?&lt;/h1&gt;

&lt;p&gt;As a philanthropist, if you have some investments that you plan to donate eventually, then you should be somewhat risk-averse with respect to those investments. When the stock and bond markets go up, you have more money, and you can donate more to charity. But other altruists also have money, and charities receive more funding, so your money isn’t as valuable. Conversely, when markets go down, you have less money to donate at the exact time when charities need funding the most.&lt;/p&gt;

&lt;p&gt;You can avoid this by buying uncorrelated investments. If you can find a way to invest that has a low (or even negative) correlation to the overall altruistic portfolio, then there will be times when most altruists lose money, but you gain money, so you can make up for other altruists’ losses. And there will be times when you lose money and others gain money, but you’re not bothered by losing money because other altruists can now provide more funding. (And of course sometimes you’ll gain or lose money at the same time as other altruists, but not as often as if you held stocks and bonds.)&lt;/p&gt;

&lt;p&gt;So if you can find an uncorrelated investment with positive expected return, that would be really valuable.&lt;/p&gt;

&lt;h2 id=&quot;how-valuable-are-uncorrelated-investments&quot;&gt;How valuable are uncorrelated investments?&lt;/h2&gt;

&lt;p&gt;We can use a simple model to get a sense of how much uncorrelated investments matter. Let’s start with the following assumptions:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Charitable causes have logarithmic utility of money.&lt;/li&gt;
  &lt;li&gt;My assets only represent a small percentage of the total altruistic portfolio.&lt;/li&gt;
  &lt;li&gt;Altruists all invest in a standard stock/bond portfolio, and use the optimal amount of &lt;a href=&quot;https://mdickens.me/2020/01/06/how_much_leverage_should_altruists_use/&quot;&gt;leverage&lt;/a&gt;.&lt;/li&gt;
  &lt;li&gt;The unleveraged portfolio has a long-run expected return of 3%, and a standard deviation of 12%.&lt;/li&gt;
  &lt;li&gt;I have the ability to invest in an alternative asset with a modest 1% return and 12% standard deviation, but zero correlation to the standard stock/bond portfolio.&lt;/li&gt;
  &lt;li&gt;Asset returns follow a &lt;a href=&quot;https://en.wikipedia.org/wiki/Log-normal_distribution&quot;&gt;log-normal distribution&lt;/a&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The easiest way to think about the value of my portfolio is in terms of the &lt;a href=&quot;https://www.investopedia.com/terms/c/certaintyequivalent.asp&quot;&gt;certainty-equivalent interest rate&lt;/a&gt;: what guaranteed interest rate would I consider equally as good as this risky portfolio?&lt;/p&gt;

&lt;p&gt;Under these assumptions, if I take my portfolio out of stocks/bonds and invest it all in the alternative asset, the new portfolio has a certainty-equivalent excess return of 0.7%.&lt;sup id=&quot;fnref:23&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:23&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt; If I have $1 million, that means switching to the alternative investment is as good as getting a guaranteed $7,000 per year—and the amount increases as my portfolio grows.&lt;/p&gt;

&lt;p&gt;If the alternative asset earns a 3% expected return instead of 1%, switching my portfolio to the alternative asset provides a certainty-equivalent return of 5.9% (!).&lt;/p&gt;

&lt;p&gt;And that’s assuming I keep using the same amount of leverage. If I lever up my investment to the point where the overall altruistic portfolio allocates the correct amount to this alternative asset, then I get a certainty-equivalent return of over 300%. (But this requires that I use on the order of 400:1 leverage, which isn’t even remotely realistic.)&lt;/p&gt;

&lt;p&gt;This is just an illustrative example, and it’s probably not accurate. The key takeaway is that allocating to uncorrelated investments can potentially provide a lot of value to a portfolio.&lt;/p&gt;
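&lt;p&gt;To make the mechanics concrete, here is a rough Monte Carlo sketch of this model in Python. It uses the assumptions above (log utility, log-normal returns, a 3%/12% pooled portfolio, a 1%/12% uncorrelated alternative), but the leverage rule (Kelly-optimal, μ/σ²), my 1% share of the pool, and the decision to hold the alternative unlevered are my own illustrative choices, so it will not reproduce the exact 0.7% figure.&lt;/p&gt;

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Assumptions from the text: pooled stock/bond portfolio earns 3% with 12% sd;
# the alternative asset earns 1% with 12% sd and zero correlation.
mu_m, sd_m = 0.03, 0.12
mu_a, sd_a = 0.01, 0.12
lev = mu_m / sd_m**2   # Kelly-optimal leverage under log utility (~2.08x),
                       # ignoring borrowing costs -- an illustrative choice

def lognormal_returns(mu, sd, size):
    """Draw returns whose growth factor is log-normal with the given
    arithmetic mean and standard deviation (assumption 6)."""
    s2 = np.log1p((sd / (1 + mu)) ** 2)
    g = np.log1p(mu) - s2 / 2
    return np.exp(rng.normal(g, np.sqrt(s2), size)) - 1

r_m = lognormal_returns(mu_m, sd_m, n)
r_a = lognormal_returns(mu_a, sd_a, n)  # independent draws => zero correlation

f = 0.01  # my assets as a small share of the pooled portfolio (assumption 2)

def cert_equiv(r_pool):
    """Certainty-equivalent return under log utility: exp(E[log growth]) - 1."""
    return np.expm1(np.mean(np.log1p(r_pool)))

pool_base = lev * r_m                      # everyone in leveraged stocks/bonds
pool_alt = (1 - f) * lev * r_m + f * r_a   # I move my slice to the alternative

print(f"CE, all stocks/bonds:    {cert_equiv(pool_base):.4%}")
print(f"CE, with my slice moved: {cert_equiv(pool_alt):.4%}")
```

&lt;p&gt;In this sketch, the pool’s certainty-equivalent return rises when my slice moves to the uncorrelated asset, even though that asset’s expected return is far lower than the leveraged portfolio’s.&lt;/p&gt;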

&lt;p&gt;&lt;strong&gt;Update 2020-12-14&lt;/strong&gt;: Since originally writing this, I have somewhat changed my views on the significance of uncorrelated investing. I believe that for most altruistic investors in practice, earning higher expected return is more important than decreasing correlation. I explain why in &lt;a href=&quot;https://mdickens.me/2020/12/14/asset_allocation_for_altruists_with_constraints&quot;&gt;Asset Allocation and Leverage for Altruists with Constraints&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I still believe that, if an altruistic investor can decrease correlation without sacrificing expected return, then they should. And I still agree with what I wrote in this essay regarding the diversification benefits of different types of investments.&lt;/p&gt;

&lt;h1 id=&quot;are-there-uncorrelated-asset-classes&quot;&gt;Are there uncorrelated asset classes?&lt;/h1&gt;

&lt;p&gt;Investors can choose from essentially four asset classes: stocks (a.k.a. equities), bonds, real estate, and commodities. Most people already hold stocks and bonds. Can we get uncorrelated return with real estate and commodities? And can we find under-utilized investments within equities?&lt;/p&gt;

&lt;p&gt;(I wrote about most of these in &lt;a href=&quot;https://mdickens.me/2020/01/06/how_much_leverage_should_altruists_use/#donors-can-decrease-correlation&quot;&gt;a previous essay&lt;/a&gt;. Here, I will go into a little more detail.)&lt;/p&gt;

&lt;h2 id=&quot;real-estate&quot;&gt;Real estate&lt;/h2&gt;

&lt;p&gt;For most individual investors, it doesn’t make sense to buy real estate as an investment because &lt;a href=&quot;https://mdickens.me/2020/10/18/risk_of_concentrating/&quot;&gt;it exposes them to too much undiversified risk&lt;/a&gt;. But it might make sense for altruists to buy real estate as part of their altruistic investments—this could help diversify the overall altruistic portfolio. Still, there are a few problems with this:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Most people aren’t rich enough to buy a real estate property as an investment. Even fewer people are rich enough to buy real estate as part of their &lt;em&gt;altruistic&lt;/em&gt; investments, i.e., after already investing enough personal money to ensure their financial security.&lt;/li&gt;
  &lt;li&gt;If altruists do buy real estate, they should make sure to diversify globally. People who own investment properties tend to buy them in the same city where they live, which makes the property much easier to manage. If effective altruists as a group do this, they will overly concentrate their investments in a few cities where EAs are over-represented, such as Oxford and Berkeley.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You can gain exposure to a basket of real estate investments by buying a &lt;a href=&quot;https://en.wikipedia.org/wiki/Real_estate_investment_trust&quot;&gt;REIT&lt;/a&gt;, which is a publicly-traded company that earns money primarily by owning real estate. But &lt;a href=&quot;http://thebamalliance.com/blog/the-role-of-reits-in-a-diversified-portfolio/&quot;&gt;REITs behave more like stocks than like real estate&lt;/a&gt;,&lt;sup id=&quot;fnref:35&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:35&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt; so they don’t provide the same diversification benefits as direct investments in properties would.&lt;/p&gt;

&lt;p&gt;Real estate makes sense as an investment for the extremely wealthy (for example, university endowments often own a lot of land). But it seems like altruists would have a hard time diversifying with real estate, even in the aggregate.&lt;/p&gt;

&lt;h2 id=&quot;commodities&quot;&gt;Commodities&lt;/h2&gt;

&lt;p&gt;Over the past century and a half, commodities have on average produced a positive return on top of the risk-free rate, with fairly low correlation to stocks (r=0.23) and essentially zero correlation to bonds &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2856435&quot;&gt;(Levine et al., 2016)&lt;/a&gt;.&lt;sup id=&quot;fnref:31&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:31&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt;&lt;sup id=&quot;fnref:32&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:32&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt; Commodities aren’t &lt;em&gt;perfectly&lt;/em&gt; uncorrelated with a conventional stock/bond portfolio, but they do provide some diversifying power.&lt;sup id=&quot;fnref:34&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:34&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;5&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;And unlike real estate, commodities are easy to buy. Various ETFs and mutual funds provide exposure to the broad commodities market (as of this writing, the most popular such ETF is &lt;a href=&quot;https://www.etf.com/PDBC&quot;&gt;PDBC&lt;/a&gt;). Commodity funds generally charge higher fees than equity or bond funds, but probably not enough to destroy the commodity return premium.&lt;/p&gt;
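
&lt;p&gt;The diversification benefit of that low correlation can be quantified with the two-asset portfolio-volatility formula. A minimal sketch, where the weights and volatilities are made-up illustrations and only the r=0.23 stock/commodity correlation comes from the figures above:&lt;/p&gt;

```python
import math

# Sketch: volatility of a two-asset portfolio,
#   sigma_p^2 = (w1*s1)^2 + (w2*s2)^2 + 2*w1*w2*r*s1*s2.
# The weights and volatilities are illustrative assumptions; r = 0.23 is
# the stock/commodity correlation cited in the text.

def portfolio_vol(w1, s1, w2, s2, r):
    var = (w1 * s1) ** 2 + (w2 * s2) ** 2 + 2 * w1 * w2 * r * s1 * s2
    return math.sqrt(var)

# 80% stocks at 16% volatility, 20% commodities at 20% volatility
mixed = portfolio_vol(0.8, 0.16, 0.2, 0.20, r=0.23)
# The same mix if the two assets were perfectly correlated
corr1 = portfolio_vol(0.8, 0.16, 0.2, 0.20, r=1.0)

print(f"r=0.23 portfolio vol: {mixed:.1%}")  # 14.3%
print(f"r=1.00 portfolio vol: {corr1:.1%}")  # 16.8%
```

&lt;p&gt;Mixing in the low-correlation asset lowers overall volatility relative to the perfectly-correlated case, even though (in this made-up example) the added asset is individually more volatile than the stock position.&lt;/p&gt;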

&lt;h2 id=&quot;startups&quot;&gt;Startups&lt;/h2&gt;

&lt;p&gt;As suggested by &lt;a href=&quot;https://rationalaltruist.com/2013/02/28/risk-aversion-and-investment-for-altruists/&quot;&gt;Paul Christiano&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;[I]f you start or invest in a small company, your payoff will depend on that company’s performance (which is typically quite risky but only weakly correlated with the market). […] This special case is only possible because the entrepreneur or investor is putting in their own effort, and moral hazard makes it hard to smooth out all of the risk across a larger pool (though VC funds will invest in many startups). You shouldn’t expect to find a similar situation in investments, except when you are providing insight which you trust but the rest of the market does not (thereby preventing you from insuring against your risk).&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Individual companies still tend to move with the market at least somewhat, so for a truly uncorrelated approach, you could invest in the startup while shorting the market.&lt;/p&gt;

&lt;p&gt;But note that, at least according to basic theory, you want the overall altruistic portfolio to hold stock in a startup in proportion to the global market portfolio. So if the startup represents 0.0000001% of global wealth, then value-aligned altruists should invest 0.0000001% in that company—which would probably be no more than about $10,000.&lt;/p&gt;

&lt;h2 id=&quot;can-we-find-uncorrelated-return-within-equities&quot;&gt;Can we find uncorrelated return within equities?&lt;/h2&gt;

&lt;p&gt;Thanks to &lt;a href=&quot;https://alphaarchitect.com/2019/12/05/global-impact-of-investor-home-country-bias/&quot;&gt;home country bias&lt;/a&gt;, most investors put too much money in their home countries and not enough into international equities. Effective altruists are over-represented in a few countries like the United States, the UK, and Switzerland. So there might be an opportunity to diversify by over-weighting countries where not many effective altruists live. For example, you could buy an emerging markets index fund like &lt;a href=&quot;https://www.etf.com/VWO&quot;&gt;VWO&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;But foreign equities are still 80-90% correlated with the domestic market. You can construct a truly uncorrelated portfolio with a long/short position: buy (go long) emerging markets while &lt;a href=&quot;https://en.wikipedia.org/wiki/Short_(finance)&quot;&gt;shorting&lt;/a&gt; domestic equities.&lt;/p&gt;

&lt;p&gt;In general, this strategy has an expected return of zero: in the long run, all equity markets should in expectation behave the same on a risk-adjusted basis. But even a zero-expected-return investment might still be a good idea if it decreases the volatility of the overall altruistic portfolio.&lt;/p&gt;

&lt;p&gt;But we might have reason to expect some regions of the world to perform better than others. We can fairly reliably predict returns over the span of a decade or so using &lt;a href=&quot;https://awealthofcommonsense.com/2016/09/the-john-bogle-expected-return-formula/&quot;&gt;John Bogle’s expected return formula&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;Future Market Returns = Dividend Yield + Earnings Growth +/- Change in P/E Ratio&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This formula has a strong theoretical justification, and it has worked well at predicting future returns over the past century (&lt;a href=&quot;https://jpm.pm-research.com/content/42/1/119&quot;&gt;Bogle and Nolan, 2015&lt;/a&gt;&lt;sup id=&quot;fnref:26&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:26&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;6&lt;/a&gt;&lt;/sup&gt;; &lt;a href=&quot;https://jpm.pm-research.com/content/early/2020/05/16/jpm.2020.1.160&quot;&gt;Radha, 2020&lt;/a&gt;&lt;sup id=&quot;fnref:27&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:27&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;7&lt;/a&gt;&lt;/sup&gt;). I won’t go into detail about why this formula makes sense, but for a good layperson’s explanation, see Colby Davis’ &lt;a href=&quot;https://rhsfinancial.com/2020/08/11/does-reality-matter-stock-market-2020/&quot;&gt;Does Reality Even Matter Anymore? Part 1&lt;/a&gt; and &lt;a href=&quot;https://rhsfinancial.com/2020/08/19/does-reality-matter-part2/&quot;&gt;Part 2&lt;/a&gt;.&lt;/p&gt;
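
&lt;p&gt;The formula is simple enough to sketch directly. In the version below, the “change in P/E ratio” term is annualized over the forecast horizon; all of the input numbers are hypothetical:&lt;/p&gt;

```python
# Sketch of the Bogle expected-return formula quoted above:
#   expected annual return ~= dividend yield + earnings growth
#                             + annualized change in the P/E ratio.
# All inputs below are hypothetical.

def bogle_expected_return(dividend_yield, earnings_growth,
                          pe_now, pe_future, years=10):
    # Annualized return contribution from the P/E moving to pe_future
    speculative = (pe_future / pe_now) ** (1 / years) - 1
    return dividend_yield + earnings_growth + speculative

# e.g. 2% yield, 4% earnings growth, P/E contracting from 30 to 20 over 10 years
r = bogle_expected_return(0.02, 0.04, pe_now=30, pe_future=20, years=10)
print(f"expected annual return: {r:.2%}")  # 2.03%
```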

&lt;p&gt;Research Affiliates’ &lt;a href=&quot;https://interactive.researchaffiliates.com/asset-allocation&quot;&gt;Asset Allocation Interactive&lt;/a&gt; uses similar methodology to project the 10-year expected return and volatility for a wide variety of asset classes. As of October 2020, it predicts the following expected real returns for various regions:&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt;US&lt;/th&gt;
      &lt;th&gt;Europe&lt;/th&gt;
      &lt;th&gt;Japan&lt;/th&gt;
      &lt;th&gt;Emerging Markets&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Return&lt;/td&gt;
      &lt;td&gt;0.2%&lt;/td&gt;
      &lt;td&gt;5.2%&lt;/td&gt;
      &lt;td&gt;4.5%&lt;/td&gt;
      &lt;td&gt;6.9%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Standard Deviation&lt;/td&gt;
      &lt;td&gt;15%&lt;/td&gt;
      &lt;td&gt;19%&lt;/td&gt;
      &lt;td&gt;18%&lt;/td&gt;
      &lt;td&gt;21%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;According to the Bogle return formula, we can get something like a 6% expected return by going long emerging markets and short US equities (probably closer to 4-5% after costs). And if altruistic investors in general under-weight emerging markets (which I expect they do), then we’re diversifying by doing so.&lt;sup id=&quot;fnref:44&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:44&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;8&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;
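
&lt;p&gt;The arithmetic behind that figure is just the long leg minus the short leg, less implementation costs. A sketch using the Research Affiliates projections from the table above (the 1–2% cost range is an assumption):&lt;/p&gt;

```python
# Sketch: expected return of a dollar-neutral long/short position built
# from the Research Affiliates projections in the table above. The 1-2%
# cost range (shorting fees, fund expenses, trading) is an assumption.

em_expected = 0.069   # long leg: emerging markets
us_expected = 0.002   # short leg: US equities

gross = em_expected - us_expected
for costs in (0.01, 0.02):
    print(f"net of {costs:.0%} costs: {gross - costs:.1%}")
# net of 1% costs: 5.7%
# net of 2% costs: 4.7%
```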

&lt;p&gt;We can do a backtest to get a sense of how this regional long/short strategy might look. Over the past 30 years, the US stock market has outperformed the rest of the world. What would have happened if we had bought US equities while shorting international equities?&lt;/p&gt;

&lt;p&gt;This backtest doesn’t really tell us how a regional long/short strategy might perform in the future because (1) we’re cheating by picking the best region in hindsight, and (2) emerging market equities tend to be more volatile than developed markets like the US. But it at least tells us how this general type of strategy might perform. (These return figures are nominal, not real, and don’t include fees or transaction costs.)&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt;US&lt;/th&gt;
      &lt;th&gt;International&lt;/th&gt;
      &lt;th&gt;Long US, Short International&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Return&lt;/td&gt;
      &lt;td&gt;10.2%&lt;/td&gt;
      &lt;td&gt;5.6%&lt;/td&gt;
      &lt;td&gt;3.5%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Standard Deviation&lt;/td&gt;
      &lt;td&gt;14.5%&lt;/td&gt;
      &lt;td&gt;16.4%&lt;/td&gt;
      &lt;td&gt;11.3%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;If the Bogle return formula is to be believed, investors can earn a positive (in expectation) and uncorrelated return by buying emerging markets while shorting US stocks. Even if we believe the US and emerging markets have similar expected (risk-adjusted) return, most investors under-weight emerging markets, so holding a long emerging/short US equity portfolio could improve the overall altruistic portfolio. (But it could be unpleasant for the investor who knowingly holds a risky investment with zero expected return.)&lt;/p&gt;
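
&lt;p&gt;For concreteness, here is how the long/short column in a backtest like the one above is computed: each period’s long/short return is the long leg’s return minus the short leg’s return (financing costs and the interest earned on short-sale proceeds are ignored here). The monthly returns below are made up:&lt;/p&gt;

```python
import statistics

# Sketch: computing a long/short return series. Each period, the
# long/short return is the long leg's return minus the short leg's
# return. The monthly return series below are made up.

us   = [0.02, -0.01, 0.03, 0.015, -0.02, 0.01]
intl = [0.01, -0.02, 0.02, 0.005, -0.01, 0.02]

# Long US, short international
long_short = [u - i for u, i in zip(us, intl)]

print("mean: ", f"{statistics.mean(long_short):.3%}")   # 0.333%
print("stdev:", f"{statistics.stdev(long_short):.3%}")  # 1.033%
```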

&lt;h1 id=&quot;factor-investing&quot;&gt;Factor investing&lt;/h1&gt;

&lt;p&gt;Some of the asset class strategies identified in the previous section look at least somewhat promising. But we can probably do better.&lt;/p&gt;

&lt;p&gt;The best way to get uncorrelated positive return might be to find a gifted investment manager who can beat the market, and then also short the market (or find a manager who gets positive performance with net zero market exposure). But among well-established managers, the best ones don’t want to take your money—they can only maintain their high returns if they keep their list of clients short. (The best investment fund in the world, Renaissance Medallion Fund, caps the fund at $10 billion and doesn’t accept any outside money.) And it’s hard to evaluate managers who don’t have an established track record. I don’t know how a manager without a long track record could produce compelling evidence that they have enough skill to beat the market.&lt;sup id=&quot;fnref:28&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:28&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;9&lt;/a&gt;&lt;/sup&gt; And if a manager uses a discretionary process, even if they’ve made money in the past, I have no way to know if they will continue to outperform.&lt;sup id=&quot;fnref:40&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:40&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;10&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;I prefer to focus on strategies that &lt;em&gt;demonstrably&lt;/em&gt; work. And the current state of the evidence suggests that the most demonstrably effective way to get uncorrelated return is via factor investing.&lt;/p&gt;

&lt;p&gt;What is factor investing?&lt;/p&gt;

&lt;p&gt;First, some background. In the sixties, some very smart economists came up with the &lt;a href=&quot;https://en.wikipedia.org/wiki/Capital_asset_pricing_model&quot;&gt;Capital Asset Pricing Model&lt;/a&gt; (CAPM). According to the original formulation of this theory, a stock can only earn a higher expected return if it has greater exposure to market movements. This exposure is called its &lt;strong&gt;market beta&lt;/strong&gt;. When the market moves up or down, a high-beta stock should move up or down even more. And, importantly, there should be no way to predict a stock’s return other than by its exposure to market beta.&lt;/p&gt;

&lt;p&gt;But there’s a problem with this theory. In a seminal 1992 paper, Eugene Fama and Kenneth French &lt;a href=&quot;http://www.business.unr.edu/faculty/liuc/files/badm742/fama_french_1992.pdf&quot;&gt;found&lt;/a&gt;&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;11&lt;/a&gt;&lt;/sup&gt; that they could predict stocks’ returns much more effectively using three factors: market beta, size, and book-to-market ratio (“value”). In their sample, small companies tended to perform better than CAPM predicted, and so did companies with high book-to-market ratios. The new Fama-French three-factor model had much greater explanatory power than the CAPM one-factor model.&lt;/p&gt;
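
&lt;p&gt;To make the three-factor model concrete, here is a sketch of how such a regression is run: a stock’s excess returns are regressed on the market, size (SMB), and value (HML) factor returns, and the fitted slopes are the stock’s factor loadings. All of the data below is synthetic:&lt;/p&gt;

```python
import numpy as np

# Sketch: a Fama-French three-factor regression. A stock's excess
# returns are regressed on the market (MKT), size (SMB), and value
# (HML) factor returns; the fitted coefficients are its factor
# loadings. All data below is synthetic.

rng = np.random.default_rng(0)
n = 240  # 20 years of monthly observations
mkt = rng.normal(0.005, 0.04, n)
smb = rng.normal(0.001, 0.02, n)
hml = rng.normal(0.002, 0.02, n)

# A synthetic stock with known loadings: beta 1.1, size 0.4, value 0.5
stock = 1.1 * mkt + 0.4 * smb + 0.5 * hml + rng.normal(0, 0.01, n)

# Least-squares fit of stock returns on an intercept plus the factors
X = np.column_stack([np.ones(n), mkt, smb, hml])
alpha, b_mkt, b_smb, b_hml = np.linalg.lstsq(X, stock, rcond=None)[0]
print(f"alpha={alpha:.4f} mkt={b_mkt:.2f} smb={b_smb:.2f} hml={b_hml:.2f}")
```

&lt;p&gt;The regression recovers loadings close to the true values, and the intercept (alpha) close to zero—which is the sense in which the three factors “explain” the stock’s return.&lt;/p&gt;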

&lt;p&gt;Later, researchers gathered more evidence on the size and value factors and found that they replicated well out of sample. Additional factors were identified, such as momentum (&lt;a href=&quot;http://www.business.unr.edu/faculty/liuc/files/BADM742/Jegadeesh_Titman_1993.pdf&quot;&gt;Jegadeesh and Titman, 1993&lt;/a&gt;&lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;12&lt;/a&gt;&lt;/sup&gt;) and profitability (&lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=570343&quot;&gt;Fama and French, 2004&lt;/a&gt;&lt;sup id=&quot;fnref:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;13&lt;/a&gt;&lt;/sup&gt;). Soon, hundreds more papers were published in the quest to identify more factors. Many of these supposed factors failed to replicate, but a few stand out.&lt;/p&gt;

&lt;h2 id=&quot;what-are-the-factors&quot;&gt;What are the factors?&lt;/h2&gt;

&lt;p&gt;Berkin and Swedroe’s book, &lt;a href=&quot;https://www.amazon.com/Your-Complete-Guide-Factor-Based-Investing/dp/0692783652&quot;&gt;Your Complete Guide to Factor-Based Investing&lt;/a&gt;, reviews the literature and presents the factors that the authors believe have the strongest supporting evidence and might be worth investing in. To earn its place in the book, a factor must meet five criteria:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;Persistent — It holds across long periods of time and different economic regimes.&lt;/p&gt;

  &lt;p&gt;Pervasive — It holds across countries, regions, sectors, and even asset classes.&lt;/p&gt;

  &lt;p&gt;Robust — It holds for various definitions (for example, there is a value premium whether it is measured by price-to-book, earnings, cash flow, or sales).&lt;/p&gt;

  &lt;p&gt;Investable — It holds up not just on paper, but also after considering actual implementation issues, such as trading costs.&lt;/p&gt;

  &lt;p&gt;Intuitive — There are logical risk-based or behavioral-based explanations for its premium and why it should continue to exist.&lt;sup id=&quot;fnref:45&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:45&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;14&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In a &lt;a href=&quot;https://alphaarchitect.com/2017/02/01/berkin-and-swedroes-factor-based-investing-book/&quot;&gt;book review&lt;/a&gt;, Wesley Gray lists the factors from the book with his own assessment of the quality of evidence behind each, ranging from gold to bronze:&lt;sup id=&quot;fnref:4&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:4&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;15&lt;/a&gt;&lt;/sup&gt;&lt;sup id=&quot;fnref:50&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:50&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;16&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;Market beta: &lt;strong&gt;Gold&lt;/strong&gt;. Pure risk. Legit.&lt;/p&gt;

  &lt;p&gt;Term Factor: &lt;strong&gt;Gold&lt;/strong&gt;. Pure risk. Legit.&lt;/p&gt;

  &lt;p&gt;Momentum Factor: &lt;strong&gt;Gold.&lt;/strong&gt; Part risk, part systematic mispricing.&lt;/p&gt;

  &lt;p&gt;Value Factor: &lt;strong&gt;Gold.&lt;/strong&gt; Part risk, part systematic mispricing.&lt;/p&gt;

  &lt;p&gt;Trend Following Factor: &lt;strong&gt;Gold.&lt;/strong&gt; Part risk, part systematic mispricing.&lt;/p&gt;

  &lt;p&gt;Size Factor: &lt;strong&gt;Silver.&lt;/strong&gt; Part risk, maybe part systematic mispricing.&lt;/p&gt;

  &lt;p&gt;Carry Factor: &lt;strong&gt;Silver.&lt;/strong&gt; Mostly risk, potentially some systematic mispricing.&lt;/p&gt;

  &lt;p&gt;Profitability &amp;amp; Quality Factors: &lt;strong&gt;Bronze.&lt;/strong&gt; Systematic mispricing? Risk? Unclear. Best used as supplement for size/value factors.&lt;/p&gt;

  &lt;p&gt;Low-volatility Factor: &lt;strong&gt;Bronze.&lt;/strong&gt; Systematic mispricing? Best used as supplement for size/value/momentum factors.&lt;/p&gt;

  &lt;p&gt;Default (or credit) Factor: &lt;strong&gt;Bronze.&lt;/strong&gt; Systematic mispricing? Doesn’t pay to own credit risk, historically.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;(Gray’s &lt;a href=&quot;https://alphaarchitect.com/2017/02/01/berkin-and-swedroes-factor-based-investing-book/&quot;&gt;review&lt;/a&gt; also includes a list of links for further reading under each factor.)&lt;/p&gt;

&lt;p&gt;We already talked about market beta as part of CAPM. The term factor represents the fact that long-term bonds tend to outperform short-term bonds. Almost all investors already expose themselves to these first two factors simply by buying stocks and bonds. If we want uncorrelated investments, we should avoid the market beta and term factors.&lt;/p&gt;

&lt;p&gt;Let’s look at the other three factors that Gray identifies as “gold”: momentum, value, and trend.&lt;/p&gt;

&lt;p&gt;In brief, what do these factors mean?&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Value&lt;/strong&gt;: Cheap stocks (as measured by the price-to-book ratio, price-to-earnings, or some other similar metric) tend to outperform expensive stocks.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Momentum&lt;/strong&gt;: Assets (such as stocks, bonds, or commodities) with good performance over the past 6-12 months relative to other assets tend to continue to outperform.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Trend&lt;/strong&gt;: Assets with good &lt;em&gt;absolute&lt;/em&gt; performance over the past 6-12 months tend to continue to perform well.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;(To illustrate the difference between momentum and trend: suppose stock A is up 50% over the past 12 months, and stock B is up 10%. The momentum factor would buy stock A and &lt;a href=&quot;https://en.wikipedia.org/wiki/Short_(finance)&quot;&gt;short&lt;/a&gt; stock B, because stock A went up more. But the trend factor would buy both, because both went up in absolute terms. Momentum and trend sound similar, but they tend to operate differently enough that they’re worth counting as separate factors—see &lt;a href=&quot;#factor-correlations&quot;&gt;Factor correlations&lt;/a&gt; for a more quantitative justification.)&lt;/p&gt;
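
&lt;p&gt;The stock A / stock B example above can be written out as a toy sketch (the returns and the long/short rules are simplified illustrations of the two factor definitions, with a third asset added for contrast):&lt;/p&gt;

```python
# Toy sketch of the momentum/trend distinction described above:
# cross-sectional momentum ranks assets against each other, while
# trend (time-series momentum) looks only at the sign of each asset's
# own return. The 12-month returns below are made up.

returns_12m = {"A": 0.50, "B": 0.10, "C": -0.05}

# Momentum: long the top half by relative return, short the rest
ranked = sorted(returns_12m, key=returns_12m.get, reverse=True)
half = len(ranked) // 2
momentum = {a: ("long" if a in ranked[:half] else "short") for a in ranked}

# Trend: long anything whose own return is positive, short if negative
trend = {a: ("long" if r > 0 else "short") for a, r in returns_12m.items()}

print("momentum:", momentum)  # A long; B and C short
print("trend:   ", trend)     # A and B long; C short
```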

&lt;p&gt;In practice, investors can use these factors to outperform the market by buying assets that have low valuations/high momentum/positive trend (and maybe also by short-selling expensive/low-momentum/negative-trend assets).&lt;/p&gt;

&lt;p&gt;How do we know these factors are real, and not the result of data mining?&lt;/p&gt;

&lt;p&gt;In the rest of this section, I will present a series of research papers on the value, momentum, and trend factors, and explain their significance. If you’re already familiar with the research or if you just want to take my word for it that factor investing works, feel free to skip ahead to &lt;a href=&quot;#uncorrelated-investments-in-practice&quot;&gt;Uncorrelated investments in practice&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id=&quot;evidence-on-factor-investing&quot;&gt;Evidence on factor investing&lt;/h2&gt;

&lt;p&gt;In 2013, Asness, Moskowitz, and Pedersen published an impressive paper titled &lt;a href=&quot;http://pages.stern.nyu.edu/~lpederse/papers/ValMomEverywhere.pdf&quot;&gt;Value and Momentum Everywhere&lt;/a&gt;.&lt;sup id=&quot;fnref:51&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:51&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;17&lt;/a&gt;&lt;/sup&gt; According to this paper, value stocks and momentum stocks outperformed in four different equity markets and in country equity indexes, government bonds, currencies, and commodities. It also showed that the performance of value and momentum could not be explained by macroeconomic risk, liquidity risk, or transaction costs.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2292544&quot;&gt;Geczy and Samonov (2016)&lt;/a&gt;&lt;sup id=&quot;fnref:24&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:24&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;18&lt;/a&gt;&lt;/sup&gt;, &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3325720&quot;&gt;Baltussen, Swinkels, and van Vliet (2019)&lt;/a&gt;&lt;sup id=&quot;fnref:25&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:25&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;19&lt;/a&gt;&lt;/sup&gt;, and &lt;a href=&quot;https://www.twocenturies.com/blog/2020/5/11/value-investing-even-deeper-history&quot;&gt;Samonov (2020)&lt;/a&gt; assembled much longer historical data sets to show that momentum and value have worked for two centuries.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2961979&quot;&gt;Hou, Xue, and Zhang (2017)&lt;/a&gt;&lt;sup id=&quot;fnref:52&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:52&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;20&lt;/a&gt;&lt;/sup&gt; attempted to replicate 447 supposed market anomalies and found that most did not replicate, but the value and momentum factors appeared robust. (The paper did not examine the trend factor.) In an interview, when asked which factors he expected to still work in 50 years’ time, Zhang said,&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;I can only speculate what would happen in 2067. I would bet on value, momentum, investment, and [return on equity] (as well as different combinations of these variables).&lt;sup id=&quot;fnref:53&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:53&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;21&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href=&quot;https://www.aqr.com/Insights/Research/Journal-Article/Fact-Fiction-and-Momentum-Investing&quot;&gt;Fact, Fiction and Momentum Investing&lt;/a&gt;&lt;sup id=&quot;fnref:14&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:14&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;22&lt;/a&gt;&lt;/sup&gt; lists some common arguments for why momentum supposedly doesn’t work in practice, and references research explaining why those arguments are wrong. For instance, it cites &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2294498&quot;&gt;Trading Costs of Asset Pricing Anomalies&lt;/a&gt;&lt;sup id=&quot;fnref:55&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:55&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;23&lt;/a&gt;&lt;/sup&gt;, which used real-world data to show that value and momentum still outperform the market after transaction costs. (This study was later extended by Frazzini et al. in &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3229719&quot;&gt;Trading Costs&lt;/a&gt;&lt;sup id=&quot;fnref:56&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:56&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;24&lt;/a&gt;&lt;/sup&gt;, which examined $1.7 trillion of real-world trades and found “actual trading costs to be an order of magnitude smaller than previous studies suggest”.)&lt;/p&gt;

&lt;p&gt;What about the trend factor? Moskowitz, Ooi, and Pedersen’s paper &lt;a href=&quot;http://docs.lhpedersen.com/TimeSeriesMomentum.pdf&quot;&gt;Time Series Momentum&lt;/a&gt;&lt;sup id=&quot;fnref:54&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:54&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;25&lt;/a&gt;&lt;/sup&gt; observed a highly significant (p&amp;lt;0.00001) trendfollowing factor “in equity index, currency, commodity, and bond futures for each of the 58 liquid instruments we consider”, even after controlling for the value and momentum factors. Later papers replicated this finding &lt;a href=&quot;https://images.aqr.com/-/media/AQR/Documents/Insights/Journal-Article/AQR-JPM-Fall-2017.pdf&quot;&gt;over a full century of data&lt;/a&gt;&lt;sup id=&quot;fnref:5&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:5&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;26&lt;/a&gt;&lt;/sup&gt; and &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3386035&quot;&gt;across 82 previously unstudied securities&lt;/a&gt;&lt;sup id=&quot;fnref:6&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:6&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;27&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;h2 id=&quot;why-have-these-factors-persisted&quot;&gt;Why have these factors persisted?&lt;/h2&gt;

&lt;p&gt;If investors can increase their performance by investing in these factors, why doesn’t everyone invest in them? Why haven’t they been arbitraged away? It’s not entirely clear, but these factors likely persist due to a combination of risk and common behavioral biases (&lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=227016&quot;&gt;Lakonishok et al., 1993&lt;/a&gt;&lt;sup id=&quot;fnref:7&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:7&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;28&lt;/a&gt;&lt;/sup&gt;), and limitations on sophisticated investors that prevent them from correcting this mispricing (&lt;a href=&quot;http://faculty.tuck.dartmouth.edu/images/uploads/faculty/jonathan-lewellen/Institutions.pdf&quot;&gt;Lewellen, 2011&lt;/a&gt;&lt;sup id=&quot;fnref:8&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:8&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;29&lt;/a&gt;&lt;/sup&gt;; &lt;a href=&quot;http://pages.stern.nyu.edu/~cedmond/phd/Shleifer%20Vishny%20JF%201997.pdf&quot;&gt;Shleifer and Vishny, 1997&lt;/a&gt;&lt;sup id=&quot;fnref:9&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:9&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;30&lt;/a&gt;&lt;/sup&gt;).&lt;/p&gt;

&lt;p&gt;I won’t go into too much detail on the risk vs. behavioral bias question or on the details of the theories behind each. &lt;a href=&quot;https://www.q-group.org/wp-content/uploads/2016/09/Ilmanen_slides.pdf&quot;&gt;This slide deck&lt;/a&gt; summarizes more research on the question. Additionally, Berkin and Swedroe cover this subject in their book, with references to more academic literature.&lt;/p&gt;

&lt;h2 id=&quot;will-these-factors-continue-to-work&quot;&gt;Will these factors continue to work?&lt;/h2&gt;

&lt;p&gt;Obviously, we can’t see into the future, and we don’t know if the value, momentum, and trend factors will continue to predict performance. In particular, the value factor has come under scrutiny recently because it has underperformed since about 2007, and trendfollowing has performed poorly over the past decade (at least until the 2020 market crash, when it performed well). How concerned should we be that two out of these three factors haven’t worked well recently?&lt;/p&gt;

&lt;p&gt;This recent poor performance is probably due to bad luck. Research on this question supports the idea that these factors will indeed continue to work:&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3400998&quot;&gt;Ilmanen et al. (2019)&lt;/a&gt;&lt;sup id=&quot;fnref:10&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:10&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;31&lt;/a&gt;&lt;/sup&gt; found “little evidence for arbitrage activity influencing returns”.&lt;sup id=&quot;fnref:42&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:42&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;32&lt;/a&gt;&lt;/sup&gt; That means sophisticated investors are not exploiting these factors to an extent that causes the factors to become less effective. Similarly, according to &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2912287&quot;&gt;Blitz (2017)&lt;/a&gt;&lt;sup id=&quot;fnref:13&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:13&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;33&lt;/a&gt;&lt;/sup&gt;:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;[F]or each factor there are not only funds which offer a large positive exposure, but also funds which offer a large negative exposure toward that factor. On aggregate, all factor exposures turn out to be close to zero, and plain market exposure is all that remains. This finding argues against the concern that factor premiums are rapidly being arbitraged away by investors in [exchange-traded funds].&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href=&quot;https://www.nber.org/system/files/working_papers/w25381/w25381.pdf&quot;&gt;Lettau et al. (2018)&lt;/a&gt;&lt;sup id=&quot;fnref:73&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:73&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;34&lt;/a&gt;&lt;/sup&gt; found a similar result: in aggregate, mutual funds, ETFs, and hedge funds had &lt;em&gt;negative&lt;/em&gt; exposure to the value factor, &lt;em&gt;even among funds that claimed to be “value” funds&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://www.researchaffiliates.com/en_us/publications/articles/reports-of-values-death-may-be-greatly-exaggerated.html&quot;&gt;Arnott et al. (2020)&lt;/a&gt;&lt;sup id=&quot;fnref:11&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:11&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;35&lt;/a&gt;&lt;/sup&gt; looked specifically at the value factor, which has performed worst in recent years, and concluded that the recent underperformance does &lt;em&gt;not&lt;/em&gt; suggest that value no longer works. They reviewed a number of popular explanations for why value investing might have stopped working and found that, while each had some explanatory power, none could fully explain value’s recent underperformance. Instead, according to the paper, value underperformed primarily because cheap companies became cheaper and expensive companies became more expensive. Value investing outperforms when undervalued companies revert to reasonable valuations; recently, they have instead become even &lt;em&gt;more&lt;/em&gt; undervalued. That means, in the words of Arnott et al., “the stage is set for potentially historic outperformance of [the value factor] over the coming decade.”&lt;/p&gt;

&lt;p&gt;Dan Rasmussen’s 2020 letter &lt;a href=&quot;https://mailchi.mp/verdadcap/an-apology-for-small-cap-value-1305433&quot;&gt;An Apology for Small-Cap Value&lt;/a&gt; explained value’s recent poor performance in terms of increasing high-yield spreads and valuation spreads, and expressed a similar sentiment to Arnott et al.: “value is uniquely positioned to capitalize on perhaps a once- or twice-in-a-century convergence of opportunities.”&lt;/p&gt;

&lt;p&gt;In short: If value investing had stopped working, we would expect to see a narrowing of the valuation spread between cheap and expensive stocks. Instead, the spread has widened, which suggests that, if anything, value will work better in the coming years than it did in the past (although changes in earnings growth could dampen the performance of value).&lt;/p&gt;

&lt;p&gt;The 2019 paper &lt;a href=&quot;https://www.aqr.com/Insights/Research/Journal-Article/You-Cant-Always-Trend-When-You-Want&quot;&gt;You Can’t Always Trend When You Want&lt;/a&gt;&lt;sup id=&quot;fnref:71&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:71&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;36&lt;/a&gt;&lt;/sup&gt; broke down the performance of the trend factor into two components: the existence of trends and the ability of trendfollowing strategies to profit off of them.&lt;sup id=&quot;fnref:72&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:72&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;37&lt;/a&gt;&lt;/sup&gt; If trendfollowing doesn’t work anymore, we’d expect to see a decrease in trading profit per trend. But profits have been remarkably close to the long-run historical average. Instead, the recent poor performance of trendfollowing is explained by a lack of trends in most markets. The paper found that both the existence of trends and the ability to make money off of them vary over time, and the current period is not particularly unusual.&lt;/p&gt;

&lt;p&gt;(Additionally, many investors benchmark their portfolios against the S&amp;amp;P 500, which has performed particularly well over the past decade. So all sorts of diversifying strategies, including not just factors but international equities, bonds, commodities, etc., look bad by comparison.)&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1949008&quot;&gt;Blitz (2011)&lt;/a&gt;&lt;sup id=&quot;fnref:12&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:12&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;38&lt;/a&gt;&lt;/sup&gt; showed that “the theoretically optimal strategic allocation to these [factors] is sizable, even when using highly conservative assumptions regarding their future expected magnitudes.” The optimal portfolio should include tilts toward value, momentum, and trend, even if these factors will only provide slightly positive outperformance going forward. That means if the overall altruistic portfolio has net zero exposure to these factors, altruists on the margin might want to put their entire portfolios into value/momentum/trend, even if the factors have a low expected return (as long as the expected return is positive).&lt;/p&gt;

&lt;p&gt;In a 2015 article, &lt;a href=&quot;https://www.aqr.com/Insights/Perspectives/How-Can-a-Strategy-Still-Work-If-Everyone-Knows-About-It&quot;&gt;How Can a Strategy Everyone Knows About Still Work?&lt;/a&gt;,&lt;sup id=&quot;fnref:57&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:57&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;39&lt;/a&gt;&lt;/sup&gt; Cliff Asness gives some reasons to expect factor investing to keep working even when it’s well known, and explains how to think about factors and their expected performance going forward.&lt;/p&gt;

&lt;p&gt;For more on this subject, see Berkin and Swedroe, Chapter 8: “Does Publication Reduce the Size of Premiums?”&lt;/p&gt;

&lt;p&gt;So the value, momentum, and trend factors can predict asset performance. And although we can’t know for sure, it seems likely that these factors aren’t going away any time soon.&lt;/p&gt;

&lt;p&gt;Berkin and Swedroe also identified size, carry, and profitability/quality as factors, and low-volatility and credit as potential factors. The evidence on these isn’t as strong, so let’s focus on the most robust factors: value, momentum, and trend. How can we use these factors to construct an uncorrelated investment portfolio?&lt;/p&gt;

&lt;h2 id=&quot;factor-correlations&quot;&gt;Factor correlations&lt;/h2&gt;

&lt;p&gt;The value and momentum factors, as constructed in the previously-cited paper &lt;a href=&quot;http://pages.stern.nyu.edu/~lpederse/papers/ValMomEverywhere.pdf&quot;&gt;Value and Momentum Everywhere&lt;/a&gt;,&lt;sup id=&quot;fnref:51:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:51&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;17&lt;/a&gt;&lt;/sup&gt; had nearly zero correlation to the market and negative correlations to each other (r=–0.5). &lt;a href=&quot;http://pages.stern.nyu.edu/~lpederse/papers/DemystifyingManagedFutures.pdf&quot;&gt;Hurst, Ooi, and Pedersen (2013)&lt;/a&gt;&lt;sup id=&quot;fnref:58&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:58&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;40&lt;/a&gt;&lt;/sup&gt; showed that a simple trendfollowing strategy had close to zero correlation to stocks, bonds, or commodities. According to &lt;a href=&quot;http://docs.lhpedersen.com/TimeSeriesMomentum.pdf&quot;&gt;Time Series Momentum&lt;/a&gt;,&lt;sup id=&quot;fnref:54:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:54&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;25&lt;/a&gt;&lt;/sup&gt; the trend factor was only weakly explained by the market beta, value, and momentum factors (r&lt;sup&gt;2&lt;/sup&gt;=0.3).&lt;/p&gt;

&lt;p&gt;As a simple replication with a longer but narrower data set, I used the &lt;a href=&quot;https://mba.tuck.dartmouth.edu/pages/faculty/ken.french/data_library.html&quot;&gt;Ken French data library&lt;/a&gt; to look at the correlation between the market beta, value, and momentum factors on US stocks from 1927 to 2019. According to the data, market beta and value had a correlation of r=0.23, market beta and momentum had r=–0.34, while value and momentum had r=–0.41.&lt;/p&gt;
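&lt;p&gt;These figures are just pairwise Pearson correlations on the monthly factor return series. A minimal sketch of the computation, using randomly generated returns as a stand-in for the Ken French data (all series names and parameters below are illustrative, not the actual data):&lt;/p&gt;

```python
import numpy as np

# Toy monthly return series standing in for the Ken French factor data.
rng = np.random.default_rng(0)
market = rng.normal(0.008, 0.045, 1000)
# A hypothetical factor series with a mild positive loading on the market.
value = 0.25 * market + rng.normal(0.003, 0.03, 1000)

# Pairwise Pearson correlation, as reported in the text.
r = float(np.corrcoef(market, value)[0, 1])
```

&lt;p&gt;The real exercise is the same computation run over each pair of monthly factor return columns from the data library.&lt;/p&gt;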

&lt;p&gt;Maybe correlation is too technical of a measurement. How does including value, momentum, and trend actually affect the performance of a portfolio? In particular, how do these factors tend to behave when the market performs poorly?&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://images.aqr.com/-/media/AQR/Documents/Alternative-Thinking/Alternative-thinking-3Q18.pdf&quot;&gt;AQR (2018)&lt;/a&gt;&lt;sup id=&quot;fnref:59&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:59&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;41&lt;/a&gt;&lt;/sup&gt; investigated what investments historically provided the biggest benefit during market downturns. It looked at six different investment strategies over the past 90 years to see how they interacted with a stock portfolio both in general and in bad times (plus an additional five strategies over the most recent 30 years). Two strategies stood out as the best: trendfollowing and “styles”—their term for a long/short factor portfolio using value, momentum, and two other factors (carry and low-volatility).&lt;/p&gt;

&lt;p&gt;My favorite chart from the AQR report:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/AQR-hedging-frontier.png&quot; /&gt;&lt;/p&gt;

&lt;h3 id=&quot;if-we-dont-short-individual-assets&quot;&gt;If we don’t short individual assets&lt;/h3&gt;

&lt;p&gt;An academic factor portfolio buys stocks (or other types of assets) that look good according to a particular factor, and short sells assets that look bad. By shorting stocks, we can cancel out the market beta factor: when the market goes up, the long positions rise while the short positions fall (and vice versa), so the fund overall doesn’t move (much). This allows the fund to gain pure exposure to other factors with no exposure to the market.&lt;/p&gt;

&lt;p&gt;But short-selling individual stocks can be expensive because you have to borrow the stocks you sell, and short lenders often charge high interest rates. Can we still build a value/momentum portfolio with low correlation to the stock market without shorting?&lt;/p&gt;

&lt;p&gt;Let’s do a simple test with the &lt;a href=&quot;https://mba.tuck.dartmouth.edu/pages/faculty/ken.french/data_library.html&quot;&gt;Ken French data library&lt;/a&gt;. Instead of investing in the long/short value and momentum factors, what if we invest in the top 10% best value stocks (measured by book-to-market, and equal weighted), as well as the top 10% momentum stocks?&lt;/p&gt;

&lt;p&gt;Using data from 1927–2019, the backtest gives the following results for the market, value stocks, and momentum stocks (ignoring fees and transaction costs):&lt;sup id=&quot;fnref:60&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:60&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;42&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt;Market&lt;/th&gt;
      &lt;th&gt;Value&lt;/th&gt;
      &lt;th&gt;Momentum&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Return&lt;/td&gt;
      &lt;td&gt;10.0%&lt;/td&gt;
      &lt;td&gt;19.4%&lt;/td&gt;
      &lt;td&gt;20.0%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Standard Deviation&lt;/td&gt;
      &lt;td&gt;18.5%&lt;/td&gt;
      &lt;td&gt;35.6%&lt;/td&gt;
      &lt;td&gt;25.3%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;a href=&quot;https://en.wikipedia.org/wiki/Sharpe_ratio&quot;&gt;Sharpe Ratio&lt;/a&gt;&lt;/td&gt;
      &lt;td&gt;0.35&lt;/td&gt;
      &lt;td&gt;0.44&lt;/td&gt;
      &lt;td&gt;0.64&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Correlation to Market&lt;/td&gt;
      &lt;td&gt;1&lt;/td&gt;
      &lt;td&gt;0.75&lt;/td&gt;
      &lt;td&gt;0.84&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;The value and momentum portfolios show impressive performance compared to the market, but unfortunately, they still have pretty high correlation. If we want uncorrelated returns, this doesn’t seem like it will work.&lt;/p&gt;
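&lt;p&gt;For reference, the statistics in these tables are computed roughly as follows. This is a simplified sketch on a synthetic return series: the actual backtest compounds monthly returns and subtracts the monthly T-bill rate when computing the Sharpe ratio.&lt;/p&gt;

```python
import numpy as np

# Annualized summary statistics for a monthly return series, compared
# against a market series (simplified; assumes rf is an annual rate).
def summarize(returns, market, rf=0.0):
    ann_return = (1 + returns.mean()) ** 12 - 1
    ann_std = returns.std(ddof=1) * np.sqrt(12)
    sharpe = (ann_return - rf) / ann_std
    corr = float(np.corrcoef(returns, market)[0, 1])
    return ann_return, ann_std, sharpe, corr

# Synthetic stand-in for ~93 years of monthly market returns (1927-2019).
rng = np.random.default_rng(2)
mkt = rng.normal(0.008, 0.053, 1116)

ret, std, sharpe, corr = summarize(mkt, mkt)
```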

&lt;p&gt;What if instead, as suggested &lt;a href=&quot;https://alphaarchitect.com/2014/12/15/an-affordable-tax-efficient-longshort-hedge-fund-solution/&quot;&gt;by Alpha Architect&lt;/a&gt;, we short the index? We exploit the value or momentum factor on the long side, but on the short side we just hold the market. According to the linked article, shorting a market index is relatively cheap, and index shorts are more tax-efficient than shorting individual stocks.&lt;/p&gt;

&lt;p&gt;If we take the article’s assumption that the short proceeds equal the risk-free rate minus 0.25 percentage points, then according to the backtest, the long-factor/short-market portfolios performed as follows:&lt;sup id=&quot;fnref:61&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:61&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;43&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt;Value&lt;/th&gt;
      &lt;th&gt;Momentum&lt;/th&gt;
      &lt;th&gt;Combined Val/Mom&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Return&lt;/td&gt;
      &lt;td&gt;9.6%&lt;/td&gt;
      &lt;td&gt;9.5%&lt;/td&gt;
      &lt;td&gt;10.1%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Standard Deviation&lt;/td&gt;
      &lt;td&gt;24.8%&lt;/td&gt;
      &lt;td&gt;13.9%&lt;/td&gt;
      &lt;td&gt;16.6%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Sharpe Ratio&lt;/td&gt;
      &lt;td&gt;0.24&lt;/td&gt;
      &lt;td&gt;0.43&lt;/td&gt;
      &lt;td&gt;0.52&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Correlation to Market&lt;/td&gt;
      &lt;td&gt;0.34&lt;/td&gt;
      &lt;td&gt;0.21&lt;/td&gt;
      &lt;td&gt;0.34&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;That’s more like it—these portfolios have much lower correlations to the market. We could even get the correlation down to zero by increasing the size of the short position, at the cost of worse return.&lt;/p&gt;
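&lt;p&gt;Mechanically, the long-factor/short-market return series is just the long leg minus the market return plus the short proceeds. A sketch under the article’s assumption that proceeds equal the risk-free rate minus 0.25 percentage points per year (synthetic data; the series and parameters are mine, not the backtest’s):&lt;/p&gt;

```python
import numpy as np

rng = np.random.default_rng(3)
monthly_rf = 0.002  # hypothetical monthly risk-free rate

# Synthetic stand-ins for the market and a long-only value portfolio.
market = rng.normal(0.008, 0.053, 1116)
value_stocks = 0.9 * market + rng.normal(0.006, 0.05, 1116)

# Shorting the index: keep the long leg, give back the market return, and
# collect short proceeds of the risk-free rate minus 25 bps per year.
short_proceeds = monthly_rf - 0.0025 / 12
long_short = value_stocks - market + short_proceeds

corr_before = float(np.corrcoef(value_stocks, market)[0, 1])
corr_after = float(np.corrcoef(long_short, market)[0, 1])
```

&lt;p&gt;Even in this toy version, the long-only leg is highly correlated with the market while the hedged version is not, mirroring the pattern in the two tables.&lt;/p&gt;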

&lt;h2 id=&quot;optimized-historical-portfolios&quot;&gt;Optimized historical portfolios&lt;/h2&gt;

&lt;p&gt;Nobody knows how factor investing will perform in the future. For illustrative purposes, let’s construct theoretically optimal portfolios under different possible futures and see what they look like.&lt;/p&gt;

&lt;p&gt;For simplicity, suppose we have three investment choices: the global equities market, the value factor, and the momentum factor. According to &lt;a href=&quot;http://pages.stern.nyu.edu/~lpederse/papers/ValMomEverywhere.pdf&quot;&gt;Asness et al. (2013)&lt;/a&gt;&lt;sup id=&quot;fnref:36&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:36&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;44&lt;/a&gt;&lt;/sup&gt;, these investments had historical volatilities of 16%, 11%, and 12%, respectively. Both factors had close to zero correlation to the market, and had –0.5 correlation to each other. Let’s assume all these numbers will continue into the future, but make different assumptions about future returns.&lt;/p&gt;

&lt;p&gt;If we naively take the historical performance figures from Asness et al., we get about a 5% expected return for the market, 6% for value, and 7% for momentum. Under these assumptions, the optimal portfolio&lt;sup id=&quot;fnref:38&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:38&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;45&lt;/a&gt;&lt;/sup&gt; allocates 9% to the market, 47% to value, and 44% to momentum.&lt;sup id=&quot;fnref:37&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:37&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;46&lt;/a&gt;&lt;/sup&gt; Among these three investment options, the optimal historical portfolio contained &lt;em&gt;almost no&lt;/em&gt; exposure to the market!&lt;/p&gt;

&lt;p&gt;Let’s see what happens if we conservatively assume that the value and momentum factors will return a mere 1% each, while the market return stays at 5%. In this case, the theoretically optimal allocation is 39% to market, 32% to value, and 29% to momentum. Even in this conservative case, the (theoretically) best portfolio still allocates more than half to the value and momentum factors. (&lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1949008&quot;&gt;Blitz (2011)&lt;/a&gt;&lt;sup id=&quot;fnref:12:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:12&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;38&lt;/a&gt;&lt;/sup&gt; finds a similar result, although it looks at long-only factor portfolios rather than long/short.)&lt;/p&gt;
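&lt;p&gt;These allocations follow from standard unconstrained mean-variance optimization: the raw weights are proportional to the inverse covariance matrix times the expected-return vector, then rescaled to sum to 1. A minimal sketch that reproduces both sets of figures from the stated assumptions (the function name is my own):&lt;/p&gt;

```python
import numpy as np

# Assumptions from Asness et al. (2013): volatilities of 16%, 11%, and 12%
# for market, value, and momentum; zero factor-market correlation; and a
# -0.5 correlation between value and momentum.
vols = np.array([0.16, 0.11, 0.12])
corr = np.array([[1.0, 0.0, 0.0],
                 [0.0, 1.0, -0.5],
                 [0.0, -0.5, 1.0]])
cov = np.outer(vols, vols) * corr

# Unconstrained mean-variance optimum: weights proportional to the inverse
# covariance matrix times expected returns, rescaled to sum to 1.
def optimal_weights(expected_returns):
    raw = np.linalg.solve(cov, expected_returns)
    return raw / raw.sum()

naive = optimal_weights(np.array([0.05, 0.06, 0.07]))
conservative = optimal_weights(np.array([0.05, 0.01, 0.01]))
```

&lt;p&gt;With the historical return estimates, this rounds to the 9%/47%/44% allocation above; with the conservative 1% factor returns, it rounds to 39%/32%/29%.&lt;/p&gt;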

&lt;p&gt;We could try to tightly optimize our factor allocation by projecting future returns and then investing in the optimal portfolio according to our projections. But it’s hard to predict optimal allocations in advance, and equal-weighted portfolios tend to work just as well out of sample (&lt;a href=&quot;http://faculty.london.edu/avmiguel/DeMiguel-Garlappi-Uppal-RFS.pdf&quot;&gt;DeMiguel, Garlappi, and Uppal, 2007&lt;/a&gt;&lt;sup id=&quot;fnref:39&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:39&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;47&lt;/a&gt;&lt;/sup&gt;). So in this example, we could simply allocate 1/3 to the market, 1/3 to value, and 1/3 to momentum.&lt;/p&gt;

&lt;p&gt;These simulated examples suggest that the altruistic portfolio should allocate substantial sums to value, momentum, and maybe other factors. Almost no altruists do this. Therefore, it seems plausible that, on the margin, people should allocate 100% of their altruistic investments to long/short factors.&lt;/p&gt;

&lt;h1 id=&quot;uncorrelated-investments-in-practice&quot;&gt;Uncorrelated investments in practice&lt;/h1&gt;

&lt;p&gt;If we want to move from the land of academia into the real world, we need to deal with some practical considerations:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;How can we actually invest in the value, momentum, and trend factors?&lt;/li&gt;
  &lt;li&gt;How can we compensate for our emotions and behavioral biases that make factor investing difficult?&lt;/li&gt;
&lt;/ol&gt;

&lt;h2 id=&quot;funds-i-like&quot;&gt;Funds I like&lt;/h2&gt;

&lt;p&gt;What are some actual investable funds that implement factor strategies like the ones I’ve discussed?&lt;/p&gt;

&lt;p&gt;I haven’t spent much time surveying the landscape of funds, and I can’t claim to know which ones are best. If I were investing a lot of money, I’d want to spend much more time learning about what investment options are available and how they work. But it feels unfair of me to talk about factor investing and then not provide any practical suggestions. So as a starting point, I can talk about a few of the funds that I like.&lt;/p&gt;

&lt;p&gt;Finding a good factor fund requires a little more than just searching for an ETF or mutual fund with “value” or “momentum” in its name. Many purported value/momentum funds &lt;a href=&quot;https://alphaarchitect.com/2017/04/06/visual-active-share-a-tool-to-help-investors-make-better-decisions/&quot;&gt;are just index funds in disguise&lt;/a&gt;. We want to find funds that provide concentrated factor exposure, not just market beta with a weak tilt.&lt;/p&gt;

&lt;p&gt;Astute readers will notice that I’ve cited a lot of papers published by researchers at &lt;a href=&quot;https://www.aqr.com/&quot;&gt;AQR Capital Management&lt;/a&gt;. It sure would be nice if they would use all this great research to build some mutual funds that take advantage of the value, momentum, and trend factors!&lt;/p&gt;

&lt;p&gt;Luckily for us, they do exactly that. I briefly reviewed all the mutual funds run by AQR, and based on what I read in the fund documents, I believe two funds give the best exposure to factors with low correlation to stocks and bonds:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;&lt;a href=&quot;https://funds.aqr.com/funds/aqr-style-premia-alternative-fund&quot;&gt;AQR Style Premia Alternative Fund&lt;/a&gt; for exposure to value and momentum&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://funds.aqr.com/funds/aqr-managed-futures-strategy-hv-fund&quot;&gt;AQR Managed Futures Strategy HV Fund&lt;/a&gt;&lt;sup id=&quot;fnref:19&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:19&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;48&lt;/a&gt;&lt;/sup&gt; for exposure to trendfollowing&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;(The term “managed futures” can describe a range of investment strategies, but it usually refers to a trendfollowing strategy, as it does in the case of this fund.) Buying these two funds together should provide exposure to the value, momentum, and trend factors. AQR Style Premia Alternative Fund also invests in the carry and “defensive”&lt;sup id=&quot;fnref:43&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:43&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;49&lt;/a&gt;&lt;/sup&gt; factors. I haven’t talked about those factors in this essay because they’re not backed by as much evidence as value/momentum/trend, but allocating some money to them might not be a bad idea. (Notice that these are the same four factors used in the “styles” strategy from &lt;a href=&quot;https://images.aqr.com/-/media/AQR/Documents/Alternative-Thinking/Alternative-thinking-3Q18.pdf&quot;&gt;AQR (2018)&lt;/a&gt;&lt;sup id=&quot;fnref:59:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:59&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;41&lt;/a&gt;&lt;/sup&gt;, cited previously in this essay.)&lt;/p&gt;

&lt;p&gt;I found these funds independently of Berkin and Swedroe, but as a corroboration, their book recommends these two funds for combined factor exposure and for trendfollowing exposure, respectively.&lt;sup id=&quot;fnref:62&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:62&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;50&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;Unluckily for some people, AQR funds have a $1 million minimum investment, so you can’t invest in both these funds unless you have $2 million. (There are three exceptions: (1) you can buy AQR funds in a retirement account with a lower minimum (but it probably doesn’t make sense to keep your altruistic funds in a retirement account); (2) if you live in Australia, you can buy &lt;a href=&quot;https://australia.aqr.com/&quot;&gt;AQR Australia funds&lt;/a&gt; with a minimum investment of $25,000; (3) some investment advisors can invest their clients’ money in AQR funds, even if they have less than $1 million.)&lt;/p&gt;

&lt;p&gt;But good trendfollowing funds “all pretty much [do] the same thing”, according to Eric Crittenden, former manager of a trendfollowing fund-of-funds.&lt;sup id=&quot;fnref:15&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:15&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;51&lt;/a&gt;&lt;/sup&gt; I like the &lt;a href=&quot;https://kfafunds.com/kmlm/&quot;&gt;KFA Mount Lucas Index Strategy ETF (KMLM)&lt;/a&gt;,&lt;sup id=&quot;fnref:18&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:18&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;52&lt;/a&gt;&lt;/sup&gt; which has a lower fee and no minimum investment.&lt;/p&gt;

&lt;p&gt;If you want other ideas, &lt;a href=&quot;https://www.rcmalternatives.com/2018/07/infographic-the-top-100-managed-futures-programs/&quot;&gt;this graphic&lt;/a&gt; shows the largest trendfollowing managed futures funds as of 2018.&lt;/p&gt;

&lt;p&gt;Outside of AQR, it’s a little harder to find a good value or momentum fund with no correlation to the stock market. The investment firm &lt;a href=&quot;https://alphaarchitect.com/2020/04/17/factor-return-library-beta-release/&quot;&gt;Alpha Architect&lt;/a&gt; offers my favorite long-only value and momentum stock ETFs: &lt;a href=&quot;https://www.etf.com/QVAL&quot;&gt;QVAL&lt;/a&gt;, &lt;a href=&quot;https://www.etf.com/IVAL&quot;&gt;IVAL&lt;/a&gt;, &lt;a href=&quot;https://www.etf.com/QMOM&quot;&gt;QMOM&lt;/a&gt;, and &lt;a href=&quot;https://www.etf.com/IMOM&quot;&gt;IMOM&lt;/a&gt;. (Disclosure: I invest in these funds.) I like these funds because they provide much higher factor exposure than most. A more traditional fund such as &lt;a href=&quot;https://www.etf.com/VTV&quot;&gt;Vanguard Value ETF&lt;/a&gt; provides large exposure to market beta, but only weakly tilts toward the value factor. The Alpha Architect funds are more expensive than Vanguard funds, but they’re just as cheap after adjusting for factor exposure.&lt;/p&gt;

&lt;p&gt;The Alpha Architect founders&lt;sup id=&quot;fnref:67&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:67&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;53&lt;/a&gt;&lt;/sup&gt; have published two books, &lt;a href=&quot;https://www.amazon.com/Quantitative-Value-Web-Site-Practitioners/dp/1118328078&quot;&gt;Quantitative Value&lt;/a&gt; and &lt;a href=&quot;https://www.amazon.com/Quantitative-Momentum-Practitioners-Momentum-Based-Selection/dp/111923719X&quot;&gt;Quantitative Momentum&lt;/a&gt;, describing their methodology in detail and explaining why they made the implementation choices that they did. They make some (well-justified) changes to the pure value and momentum factors, incorporating tilts toward quality and low volatility. So their funds may perform a bit better in expectation than a vanilla value or momentum fund (although the main benefit comes from concentrated exposure to the value and momentum factors, not the minor implementation changes). I won’t talk about how exactly these four funds work or why I expect them to outperform “pure” value/momentum, but the books go into lots of detail on this. Alpha Architect has also published two shorter articles summarizing how the funds work: &lt;a href=&quot;https://alphaarchitect.com/2014/10/07/the-quantitative-value-investing-philosophy/&quot;&gt;The Quantitative Value Investing Philosophy&lt;/a&gt; and &lt;a href=&quot;https://alphaarchitect.com/2015/12/01/quantitative-momentum-investing-philosophy/&quot;&gt;The Quantitative Momentum Investing Philosophy&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;But these funds are long-only rather than long/short. As we saw &lt;a href=&quot;#if-we-dont-short-individual-assets&quot;&gt;previously&lt;/a&gt;, long-only equity funds likely have a high correlation to the broad market. According to Alpha Architect’s &lt;a href=&quot;https://alphaarchitect.com/2020/04/17/factor-return-library-beta-release/&quot;&gt;simulated historical data&lt;/a&gt; going back to 1992, an equal-weighted combination of the four strategies (QVAL/IVAL/QMOM/IMOM) would have had a correlation to the stock market of 0.68.&lt;sup id=&quot;fnref:63&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:63&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;54&lt;/a&gt;&lt;/sup&gt; But if we bought the Alpha Architect ETFs while shorting market index funds, as discussed &lt;a href=&quot;#if-we-dont-short-individual-assets&quot;&gt;previously&lt;/a&gt;, the resulting portfolio actually had a negative correlation to the market (r=–0.30). Alpha Architect’s Jack Vogel suggests a similar approach in his article, &lt;a href=&quot;https://alphaarchitect.com/2014/12/15/an-affordable-tax-efficient-longshort-hedge-fund-solution/&quot;&gt;How to Create a Tax-Efficient Hedge Fund&lt;/a&gt;. He proposes buying value/momentum stocks while shorting the S&amp;amp;P 500. He claims that shorting indexes is “much cheaper on an after-cost, after-tax basis for a taxable investor” than shorting individual stocks.&lt;/p&gt;

&lt;p&gt;(In practice, it probably doesn’t make sense to go fully market neutral. In theory, the larger your short position, the more leverage you can use, so your expected return doesn’t change; but after considering real-world costs and frictions, increasing the size of the short position reduces expected return. I believe the short side should be around 1/3 to 1/2 as big as the long side, based on an analysis I did &lt;a href=&quot;https://mdickens.me/2020/12/14/asset_allocation_for_altruists_with_constraints/&quot;&gt;here&lt;/a&gt;, in which I explain how I derived this estimate.)&lt;/p&gt;

&lt;p&gt;I like Alpha Architect’s funds because they provide concentrated factor exposure, and the firm openly discusses the intricate details of their methodology. But unlike the AQR Style Premia Alternative Fund, the Alpha Architect funds only use equities. The AQR fund also uses bonds, commodities, and currencies, so it can probably provide better diversification (although it’s somewhat debatable whether a value/momentum factor fund should include asset classes other than equities). On the other hand, the Alpha Architect funds concentrate in a small number of stocks, which &lt;a href=&quot;https://mdickens.me/2021/02/08/concentrated_stock_selection/&quot;&gt;works better&lt;/a&gt; for small investors. And ETFs are generally &lt;a href=&quot;https://alphaarchitect.com/2014/04/01/why-etfs-are-more-tax-efficient-than-mutual-funds/&quot;&gt;more tax-efficient&lt;/a&gt; than mutual funds. Ultimately it’s not clear whether to prefer the AQR Style Premia Alternative Fund or a long/short strategy using the Alpha Architect ETFs, but either choice probably works fine.&lt;/p&gt;

&lt;p&gt;This is how I personally would invest my altruistic money, given what I know right now:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;I’d put about half in a long/short strategy where I buy QVAL/IVAL/QMOM/IMOM while shorting a US index fund and an international index fund, with the short position being about 1/3 to 1/2 as large as the long position.&lt;/li&gt;
  &lt;li&gt;I’d put the other half in &lt;a href=&quot;https://kfafunds.com/kmlm/&quot;&gt;KFA Mount Lucas Index Strategy ETF&lt;/a&gt;.&lt;sup id=&quot;fnref:20&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:20&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;55&lt;/a&gt;&lt;/sup&gt;&lt;/li&gt;
  &lt;li&gt;I’d &lt;a href=&quot;https://mdickens.me/2020/01/06/how_much_leverage_should_altruists_use/&quot;&gt;use leverage&lt;/a&gt;—probably between 2:1 and 3:1.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Update 2020-12-14&lt;/strong&gt;: I wrote a new essay that goes into more detail on asset allocation: &lt;a href=&quot;https://mdickens.me/2020/12/14/asset_allocation_for_altruists_with_constraints/&quot;&gt;Asset Allocation and Leverage for Altruists with Constraints&lt;/a&gt;. See the linked essay for further discussion.&lt;/p&gt;

&lt;p&gt;There are many details to consider when making investments in practice, and I can’t cover all of them. If you’re considering building a factor portfolio, do more research or speak to a good investment advisor&lt;sup id=&quot;fnref:69&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:69&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;56&lt;/a&gt;&lt;/sup&gt; to ensure you understand the implementation details.&lt;/p&gt;

&lt;p&gt;I will briefly mention a few such details:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Different funds will have different tax consequences, which can substantially impact the long-run return of a portfolio.&lt;/li&gt;
  &lt;li&gt;Some ETFs, including the Alpha Architect funds, have fairly low volume, so traders must be careful to avoid paying large &lt;a href=&quot;https://en.wikipedia.org/wiki/Bid%E2%80%93ask_spread&quot;&gt;bid/ask spreads&lt;/a&gt;.&lt;/li&gt;
  &lt;li&gt;Wealthy investors can ask ETF issuers to issue new shares directly to them, rather than having to buy shares on the open market. This can avert liquidity issues.&lt;/li&gt;
  &lt;li&gt;You could pay an investment manager to develop a custom factor portfolio, which might come closer to ideal than the funds I proposed using. (I am sure Alpha Architect would &lt;a href=&quot;https://alphaarchitect.com/customsolutions/&quot;&gt;provide this service&lt;/a&gt; for a reasonable fee—they already run custom strategies for some clients.) This approach seems especially appealing for extremely wealthy investors.&lt;/li&gt;
  &lt;li&gt;Altruists might &lt;a href=&quot;https://mdickens.me/2020/01/06/how_much_leverage_should_altruists_use/&quot;&gt;prefer to use leverage&lt;/a&gt;, perhaps a lot of leverage. Investors who want leverage need to decide &lt;a href=&quot;https://reducing-suffering.org/should-altruists-leverage-investments/#Possible_leverage_methods&quot;&gt;how to get it&lt;/a&gt;.&lt;/li&gt;
  &lt;li&gt;Altruists might prefer high-risk, high-return funds, especially if they can’t use leverage. For example, AQR Style Premia Alternative Fund targets 10% volatility, which is low enough that investors who can’t use leverage might prefer to invest in something more volatile.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;behavioral-concerns&quot;&gt;Behavioral concerns&lt;/h2&gt;

&lt;p&gt;Factors like value and momentum probably arise from &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=227016&quot;&gt;behavioral biases&lt;/a&gt;&lt;sup id=&quot;fnref:7:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:7&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;28&lt;/a&gt;&lt;/sup&gt; that most investors share. People naturally benchmark their investments against the stock market, and they don’t like deviating from it for long. But a long/short factor portfolio (or really any sort of &lt;a href=&quot;https://en.wikipedia.org/wiki/Market_neutral&quot;&gt;market-neutral&lt;/a&gt; portfolio) can deviate from the market for a decade or more at a time. Even long-term-oriented funds such as endowments face heavy criticism after &lt;a href=&quot;https://mebfaber.com/2016/09/29/harvard-crimson-enroll-investing-101/&quot;&gt;a single bad year&lt;/a&gt;. According to a survey by State Street, 40% of institutions would fire a manager after a single bad year, and 89% would fire them after two bad years (&lt;a href=&quot;https://rhsfinancial.com/2016/08/22/the-worlds-money-is-being-managed-by-people-with-completely-unrealistic-expectations/&quot;&gt;more&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;Most people don’t actually want zero correlation to the market. We want something that goes up when the market goes down, but that doesn’t go down when the market goes up. (Please let me know if you ever discover such a mythical investment!&lt;sup id=&quot;fnref:64&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:64&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;57&lt;/a&gt;&lt;/sup&gt;)&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://alphaarchitect.com/2018/08/23/academic-factor-portfolios-are-extremely-painful-unless-you-are-an-alien/&quot;&gt;Factor portfolios are extremely painful. Unless you are an alien.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This raises a practical problem: we’re not aliens. So how do we invest in factor portfolios without giving up when times get tough?&lt;/p&gt;

&lt;p&gt;Trendfollowing fund manager Eric Crittenden tells a story about this:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;I would show people the calendar-year returns of a managed futures index and let them compare those calendar-year returns to the US stock market and ask them, how would you feel about making a 5% allocation to this managed futures index? And the vast majority of them enthusiastically declined to make an allocation to managed futures. And I said, are you sure? Not even 5%? And they said no way, my clients are gonna fire me, it underperforms, this thing sucks. Why would I do that? I said OK, alright, no problem, let’s kick the managed futures out.&lt;/p&gt;

  &lt;p&gt;And then I would show them this thing that I call “all-weather” and I didn’t tell them what it was. But between you and me, it’s just a 50/50 split of managed futures and US stocks. But they don’t know. So they do the same thing—they scroll down, they look at the years, and they say, “All right, now we’re talking. This is something I can get behind.” […] And then I reveal to them it’s just a 50/50 split of the thing you kicked out and the thing you [already invest in]—and they’re baffled. Absolutely baffled. To the point where they don’t believe me so I have to give them the spreadsheet and let them look at it and see the math.&lt;sup id=&quot;fnref:22&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:22&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;58&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;(Crittenden ended up &lt;a href=&quot;https://www.standpointfunds.com/&quot;&gt;starting a firm&lt;/a&gt; that invests in both equities and trendfollowing—much like the 50/50 split he describes.)&lt;/p&gt;

&lt;p&gt;This story suggests that the best way to manage our behavioral issues might be to give up on the idea of investing in a strategy that’s &lt;em&gt;totally&lt;/em&gt; uncorrelated to the market, and instead do something that’s only &lt;em&gt;partially&lt;/em&gt; correlated, such as a mix between trendfollowing/momentum/value and a conventional equity index.&lt;/p&gt;
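&lt;p&gt;The arithmetic behind Crittenden’s blend is simple to sketch: with annual rebalancing, the 50/50 mix earns the average of the two return streams each year, while its volatility falls well below the average of the two volatilities whenever the streams are weakly or negatively correlated. The return numbers below are invented purely for illustration:&lt;/p&gt;

```python
import numpy as np

# Hypothetical calendar-year returns (made up for illustration, not real data).
stocks = np.array([0.15, -0.30, 0.25, 0.10, -0.05])
managed_futures = np.array([-0.05, 0.20, -0.10, 0.02, 0.12])

# Rebalanced annually, a 50/50 blend earns the average of the two returns
# each year.
blend = 0.5 * stocks + 0.5 * managed_futures

# When the two streams are weakly or negatively correlated, the blend's
# volatility is far below the average of the individual volatilities.
print(stocks.std(), managed_futures.std(), blend.std())
```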

&lt;p&gt;Some more suggestions, taken from talks by &lt;a href=&quot;https://www.youtube.com/watch?v=rQFK0j6n02g&quot;&gt;Meb Faber&lt;/a&gt;&lt;sup id=&quot;fnref:65&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:65&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;59&lt;/a&gt;&lt;/sup&gt; and &lt;a href=&quot;https://www.youtube.com/watch?v=2Fkr4lIpJDU&quot;&gt;Wesley Gray&lt;/a&gt;:&lt;sup id=&quot;fnref:66&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:66&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;60&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Create a set of rules and then stick to them. If possible, automate execution—that way, you never have to force yourself to make an uncomfortable trade.&lt;/li&gt;
  &lt;li&gt;Share your investing plan with someone who can help hold you accountable.&lt;/li&gt;
  &lt;li&gt;Hire an investment advisor to keep you on track/provide behavioral coaching.&lt;/li&gt;
  &lt;li&gt;Make sure you understand what you’re investing in and why you expect it to work.&lt;/li&gt;
  &lt;li&gt;Study the history of your investment strategy, particularly the times when it’s performed badly. Cliff Asness of AQR has a rule of thumb on what to expect from a strategy during bad times: take the biggest historical drawdown and double it.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The way I think about it: if I’m losing money during a market upswing, then I can’t donate as much. But that’s okay because other altruists will donate more. I don’t think of my altruistic investments as my own money; I think of them as the world’s money. And I’m happy as long as the world gets richer overall.&lt;/p&gt;

&lt;p&gt;Ultimately, I don’t believe there’s a &lt;em&gt;great&lt;/em&gt; way to overcome our behavioral biases—if there were, factor investing would stop working. But I’m hoping the sort of person who’s crazy enough to donate 10% of their income to cash transfers or AI safety or wild animal welfare is also crazy enough to stick with a weird, unpopular investment portfolio even during bad times.&lt;/p&gt;

&lt;h1 id=&quot;appendix&quot;&gt;Appendix&lt;/h1&gt;

&lt;h2 id=&quot;appendix-a-factors-other-than-valuemomentumtrend&quot;&gt;Appendix A: Factors other than value/momentum/trend&lt;/h2&gt;

&lt;p&gt;In this essay, I focused on value, momentum, and trendfollowing because these are the strongest, most robust factors. But it might make sense to diversify across more than three factors. As shown in &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1949008&quot;&gt;Blitz (2011)&lt;/a&gt;&lt;sup id=&quot;fnref:12:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:12&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;38&lt;/a&gt;&lt;/sup&gt; and in my own analysis &lt;a href=&quot;#optimized-historical-portfolios&quot;&gt;above&lt;/a&gt;, even a factor with only slightly positive expected return should still receive a substantial allocation. That suggests we should invest in as many factors as we can, as long as the factors are reasonably credible.&lt;/p&gt;
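&lt;p&gt;Blitz’s point can be illustrated with a toy mean-variance calculation: for uncorrelated return streams, the optimal weight of each stream is proportional to its expected excess return divided by its variance, so even a small premium earns a meaningful allocation. The premiums and volatilities below are made up for illustration, not estimates from this essay:&lt;/p&gt;

```python
import numpy as np

# Illustrative, invented premiums and volatilities for three
# uncorrelated return streams: the market, a strong factor, and a
# factor with only a slightly positive expected return.
mu = np.array([0.05, 0.04, 0.01])
sigma = np.array([0.16, 0.12, 0.10])

# With zero correlations, the mean-variance-optimal (unnormalized)
# weight of each stream is mu / sigma^2; normalize to sum to 1.
raw = mu / sigma**2
weights = raw / raw.sum()

# Even the weak factor receives a substantial share of the portfolio.
print(weights)
```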

&lt;p&gt;The &lt;a href=&quot;#funds-i-like&quot;&gt;funds I listed&lt;/a&gt; do include exposure to factors other than value and momentum. AQR Style Premia Fund includes the carry and “defensive” factors; QVAL/IVAL use quality; and QMOM/IMOM use low-volatility.&lt;/p&gt;

&lt;p&gt;It might make sense to invest at least a little money in &lt;a href=&quot;#what-are-the-factors&quot;&gt;every factor listed&lt;/a&gt; by Berkin and Swedroe, with the largest allocations going to the value, momentum, and trend factors. Any alleged factors not included in Berkin and Swedroe’s list probably don’t have strong enough supporting evidence to justify investing in them.&lt;/p&gt;

&lt;h2 id=&quot;appendix-b-reducing-correlation-to-income&quot;&gt;Appendix B: Reducing correlation to income&lt;/h2&gt;

&lt;p&gt;Technically, we don’t just want to find investments with low correlation to a traditional stock/bond portfolio. Lots of altruists donate money out of their income, so we also want investments not to be correlated with income. In aggregate, income grows with the economy,&lt;sup id=&quot;fnref:70&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:70&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;61&lt;/a&gt;&lt;/sup&gt; so we want to reduce correlation to economic growth. Can we get that with the value, momentum, and trend factors?&lt;/p&gt;

&lt;p&gt;Yes. &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3082577&quot;&gt;Sheth and Tim (2017)&lt;/a&gt;&lt;sup id=&quot;fnref:46&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:46&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;62&lt;/a&gt;&lt;/sup&gt; found that the value and momentum factors both performed worse in recessions, but still had positive performance on average. (During recessions, the market had substantially negative performance.) Additionally, during recessions, the pairwise correlations (market, value) and (market, momentum) both dropped below zero, suggesting that they provide even stronger diversification benefits in bad times. In an informal article, &lt;a href=&quot;https://texpers.blogspot.com/2020/08/recessions-and-factor-performance-what-history-tells-us.html&quot;&gt;Berkin (2020)&lt;/a&gt;&lt;sup id=&quot;fnref:47&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:47&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;63&lt;/a&gt;&lt;/sup&gt; reproduced this result over a longer time horizon.&lt;/p&gt;
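&lt;p&gt;The conditional-correlation result is straightforward to reproduce given a recession indicator: split the sample by regime and compute the pairwise correlation within each. A minimal sketch with invented monthly numbers:&lt;/p&gt;

```python
import numpy as np

# Toy monthly returns and a recession flag (invented for illustration).
market = np.array([0.02, -0.04, 0.01, -0.06, 0.03, -0.05])
value = np.array([0.01, 0.02, 0.00, 0.03, 0.01, 0.02])
recession = np.array([False, True, False, True, False, True])

# Pairwise (market, value) correlation computed separately in each regime.
corr_recession = np.corrcoef(market[recession], value[recession])[0, 1]
corr_expansion = np.corrcoef(market[~recession], value[~recession])[0, 1]
print(corr_recession, corr_expansion)
```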

&lt;p&gt;&lt;a href=&quot;https://www.ucc.ie/en/media/research/centreforinvestmentresearch/wp/TSMEconomicRisk_Final.pdf&quot;&gt;Hutchinson and O’Brien (2015)&lt;/a&gt;&lt;sup id=&quot;fnref:48&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:48&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;64&lt;/a&gt;&lt;/sup&gt; found a similar result for trendfollowing: it performed worse during recessions than expansions, but still showed positive return during both types of economic environment, and provided diversifying power during downturns. I was able to reproduce this result using AQR’s publicly available &lt;a href=&quot;https://www.aqr.com/Insights/Datasets/Time-Series-Momentum-Factors-Monthly&quot;&gt;time series momentum factor data&lt;/a&gt;.&lt;sup id=&quot;fnref:49&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:49&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;65&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;h2 id=&quot;appendix-c-changelog&quot;&gt;Appendix C: Changelog&lt;/h2&gt;

&lt;p&gt;2020-12-04:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Add a reference to Babu et al. (2020), with explanation.&lt;/li&gt;
  &lt;li&gt;Update &lt;a href=&quot;#funds-i-like&quot;&gt;Funds I like&lt;/a&gt; based on some new analysis I’ve done since writing this essay. Previously, I wrote that I personally would invest in a fully market-neutral strategy. I now believe it makes more sense to have some market exposure because beyond a certain point, the cost of short-selling (in terms of reducing expected return) exceeds the diversification benefit.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;2021-01-29:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Update &lt;a href=&quot;#funds-i-like&quot;&gt;Funds I like&lt;/a&gt; to replace &lt;a href=&quot;https://www.axsinvestments.com/axs-chesapeake-strategy-fund/&quot;&gt;AXS Chesapeake Strategy Fund&lt;/a&gt; with &lt;a href=&quot;https://kfafunds.com/kmlm/&quot;&gt;KFA Mount Lucas Index Strategy ETF (KMLM)&lt;/a&gt;.&lt;/li&gt;
  &lt;li&gt;Change reference to “a forthcoming essay” to link to the actual essay.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;2021-02-08:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;Add a reference to Lettau et al. (2018) as a corroboration of Blitz (2017).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;2022-05-31:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;Minor wording improvements.&lt;/li&gt;
&lt;/ul&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:23&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I performed these calculations using &lt;a href=&quot;https://github.com/michaeldickens/public-scripts/blob/master/leverage.py&quot;&gt;leverage.py&lt;/a&gt;, adding the following code:&lt;/p&gt;

      &lt;pre&gt;&lt;code&gt;rra = 1  # coefficient of relative risk aversion

# Certainty-equivalent return of the baseline portfolio.
print(LeverageEnvironment(
    rra=rra, mu=0.03, sigma=0.12
).certainty_equivalent_return())

# Certainty-equivalent return after shifting a 0.01% slice into an
# uncorrelated asset with lower expected return but equal volatility.
print(LeverageEnvironment(
    rra=rra, mu=0.9999*0.03 + 0.0001*0.01,
    sigma=np.sqrt((0.9999*0.12)**2 + (0.0001*0.12)**2)
).certainty_equivalent_return())
&lt;/code&gt;&lt;/pre&gt;
      &lt;p&gt;&lt;a href=&quot;#fnref:23&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:35&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;The linked article primarily reviews a paper: Kizer and Grover (2017). &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2965146&quot;&gt;Are REITs a Distinct Asset Class?&lt;/a&gt;&lt;/p&gt;

      &lt;p&gt;It also provides some analysis that’s not covered in Kizer and Grover. &lt;a href=&quot;#fnref:35&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:31&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Levine, Ooi, Richardson, and Sasseville (2016). &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2856435&quot;&gt;Commodities for the Long Run.&lt;/a&gt; &lt;a href=&quot;#fnref:31&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:32&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;For interested readers, it is worth noting that commodity futures had positive excess return even while in contango (although returns were higher during times of backwardation, with arithmetic means of 1.8% and 7.7%, respectively). &lt;a href=&quot;#fnref:32&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:34&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;At least according to Levine et al., commodities provide more value when, rather than holding them long-only, we go long on commodity futures in &lt;a href=&quot;https://en.wikipedia.org/wiki/Normal_backwardation&quot;&gt;backwardation&lt;/a&gt; and short commodities in &lt;a href=&quot;https://en.wikipedia.org/wiki/Contango&quot;&gt;contango&lt;/a&gt;. This is essentially a long/short carry factor strategy. In a &lt;a href=&quot;#factor-investing&quot;&gt;later section&lt;/a&gt;, I will discuss in more general terms why I believe factor investing makes sense. &lt;a href=&quot;#fnref:34&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:26&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Bogle and Nolan (2015). &lt;a href=&quot;https://jpm.pm-research.com/content/42/1/119&quot;&gt;Occam’s Razor Redux: Establishing Reasonable Expectations for Financial Market Returns.&lt;/a&gt; &lt;a href=&quot;#fnref:26&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:27&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Radha (2020). &lt;a href=&quot;https://jpm.pm-research.com/content/early/2020/05/16/jpm.2020.1.160&quot;&gt;Using CAPE to Forecast Country Returns for Designing an International Country Rotation Portfolio.&lt;/a&gt; &lt;a href=&quot;#fnref:27&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:44&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I know we haven’t gotten to the section on factor investing yet, but this strategy is a type of value factor strategy. Emerging equities are cheap and US equities are expensive, so a long/short strategy gains exposure to the value factor while mostly eliminating market beta exposure. &lt;a href=&quot;#fnref:44&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:28&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Notice that getting uncorrelated positive return and beating the market are equivalent—you can convert one into the other. If a manager beats the market but still has exposure to the market, you can get uncorrelated positive return by shorting the market. And if a manager provides uncorrelated positive return, you can beat the market by investing in both the market and that manager. &lt;a href=&quot;#fnref:28&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:40&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;According to Carhart (1997), &lt;a href=&quot;http://www.mkwinc.com/pdf/articles/PersistenceofMutualFundPerformancebyCarhart.pdf&quot;&gt;On Persistence in Mutual Fund Performance&lt;/a&gt;, some mutual funds do persistently beat the market, but their performance is almost entirely explained by their exposure to the size, value, and momentum factors. &lt;a href=&quot;#fnref:40&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Fama and French (1992). &lt;a href=&quot;http://www.business.unr.edu/faculty/liuc/files/badm742/fama_french_1992.pdf&quot;&gt;The Cross-Section of Expected Stock Returns.&lt;/a&gt; &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:2&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Jegadeesh and Titman (1993). &lt;a href=&quot;http://www.business.unr.edu/faculty/liuc/files/BADM742/Jegadeesh_Titman_1993.pdf&quot;&gt;Returns to Buying Winners and Selling Losers: Implications for Stock Market Efficiency.&lt;/a&gt; &lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:3&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Fama and French (2004). &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=570343&quot;&gt;Profitability, Growth, and Average Returns.&lt;/a&gt; &lt;a href=&quot;#fnref:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:45&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Introduction, page 24. &lt;a href=&quot;#fnref:45&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:4&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Quote is edited for brevity. &lt;a href=&quot;#fnref:4&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:50&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Berkin and Swedroe classify the factors a little differently. They identify seven factors: market beta, term, momentum, value, size, carry, and profitability/quality. They consider trendfollowing a strong investment strategy, but they don’t classify it as a separate factor. They also identify three commonly discussed factors that they don’t believe hold up: dividend yield, low-volatility, and default/credit. Gray does not include dividend yield on his list, presumably because he considers it sub-bronze (I would agree with that assessment).&lt;/p&gt;

      &lt;p&gt;Berkin and Swedroe acknowledge that trendfollowing has substantial predictive power on top of momentum, so I’m not sure why they don’t consider it a separate factor. &lt;a href=&quot;#fnref:50&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:51&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Asness, Moskowitz, and Pedersen (2013). &lt;a href=&quot;http://pages.stern.nyu.edu/~lpederse/papers/ValMomEverywhere.pdf&quot;&gt;Value and Momentum Everywhere.&lt;/a&gt; &lt;a href=&quot;#fnref:51&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:51:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:24&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Geczy and Samonov (2016). &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2292544&quot;&gt;Two Centuries of Price Return Momentum.&lt;/a&gt; &lt;a href=&quot;#fnref:24&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:25&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Baltussen, Swinkels, and van Vliet (2019). &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3325720&quot;&gt;Global Factor Premiums.&lt;/a&gt; &lt;a href=&quot;#fnref:25&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:52&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Hou, Xue, and Zhang (2017). &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2961979&quot;&gt;Replicating Anomalies.&lt;/a&gt; &lt;a href=&quot;#fnref:52&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:53&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Wesley Gray (2017). &lt;a href=&quot;https://alphaarchitect.com/2017/10/26/want-to-learn-more-about-factor-investing-read-this/&quot;&gt;Want to Learn More About Factor Investing? Read This.&lt;/a&gt; &lt;a href=&quot;#fnref:53&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:14&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Asness, Frazzini, Israel, and Moskowitz (2014). &lt;a href=&quot;https://www.aqr.com/Insights/Research/Journal-Article/Fact-Fiction-and-Momentum-Investing&quot;&gt;Fact, Fiction and Momentum Investing.&lt;/a&gt; &lt;a href=&quot;#fnref:14&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:55&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Frazzini, Israel, and Moskowitz (2013). &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2294498&quot;&gt;Trading Costs of Asset Pricing Anomalies.&lt;/a&gt; &lt;a href=&quot;#fnref:55&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:56&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Frazzini, Israel, and Moskowitz (2018). &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3229719&quot;&gt;Trading Costs.&lt;/a&gt; &lt;a href=&quot;#fnref:56&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:54&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Moskowitz, Ooi, and Pedersen (2011). &lt;a href=&quot;http://docs.lhpedersen.com/TimeSeriesMomentum.pdf&quot;&gt;Time Series Momentum.&lt;/a&gt; &lt;a href=&quot;#fnref:54&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:54:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:5&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Hurst, Ooi, and Pedersen (2017). &lt;a href=&quot;https://images.aqr.com/-/media/AQR/Documents/Insights/Journal-Article/AQR-JPM-Fall-2017.pdf&quot;&gt;A Century of Evidence on Trend-Following Investing.&lt;/a&gt; &lt;a href=&quot;#fnref:5&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:6&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Babu, Levine, Ooi, Pedersen, and Stamelos (2018). &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3386035&quot;&gt;Trends Everywhere.&lt;/a&gt; &lt;a href=&quot;#fnref:6&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:7&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Lakonishok, Shleifer and Vishny (1993). &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=227016&quot;&gt;Contrarian Investment, Extrapolation, and Risk.&lt;/a&gt; &lt;a href=&quot;#fnref:7&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:7:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:8&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Lewellen (2011). &lt;a href=&quot;http://faculty.tuck.dartmouth.edu/images/uploads/faculty/jonathan-lewellen/Institutions.pdf&quot;&gt;Institutional Investors and the Limits of Arbitrage.&lt;/a&gt; &lt;a href=&quot;#fnref:8&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:9&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Shleifer and Vishny (1997). &lt;a href=&quot;http://pages.stern.nyu.edu/~cedmond/phd/Shleifer%20Vishny%20JF%201997.pdf&quot;&gt;The Limits of Arbitrage.&lt;/a&gt; &lt;a href=&quot;#fnref:9&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:10&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Ilmanen, Israel, Moskowitz, Thapar, and Wang (2019). &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3400998&quot;&gt;How Do Factor Premia Vary Over Time? A Century of Evidence.&lt;/a&gt; &lt;a href=&quot;#fnref:10&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:42&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;&lt;a href=&quot;https://rady.ucsd.edu/docs/seminars/Pontiff-paper.pdf&quot;&gt;McLean and Pontiff (2016)&lt;/a&gt;&lt;sup id=&quot;fnref:41&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:41&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;66&lt;/a&gt;&lt;/sup&gt; examined 82 published market anomalies and found that on average, they worked 35% less well post-publication. Ilmanen et al. (2019) re-examined the McLean and Pontiff data along with new pre-sample data and found that the anomalies on average decayed about as much in the pre-sample period as post-sample, suggesting that the performance decay was attributable to overfitting, not arbitrage by sophisticated investors. They “acknowledge, however, that the power to detect arbitrage crowding into a factor and its effect on prices is challenging […]. For this reason, different tests and different samples may yield different inferences and hence may be why our conclusions differ from Mclean and Pontiff (2016)”.&lt;/p&gt;

      &lt;p&gt;My own take is that even though the post-sample decays did not differ from the pre-sample decays in a statistically significant way (except for two of the tested factors), the factors did overall perform worse post-sample, which offers Bayesian evidence that arbitrage activity influenced returns (even if only weakly). &lt;a href=&quot;#fnref:42&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:13&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Blitz (2017). &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2912287&quot;&gt;Are Exchange-Traded Funds Harvesting Factor Premiums?&lt;/a&gt; &lt;a href=&quot;#fnref:13&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:73&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Lettau, Ludvigson &amp;amp; Manoel (2018). &lt;a href=&quot;https://www.nber.org/system/files/working_papers/w25381/w25381.pdf&quot;&gt;Characteristics of Mutual Fund Portfolios: Where Are the Value Funds?&lt;/a&gt; &lt;a href=&quot;#fnref:73&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:11&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Arnott, Harvey, Kalesnik, and Linnainmaa (2020). &lt;a href=&quot;https://www.researchaffiliates.com/en_us/publications/articles/reports-of-values-death-may-be-greatly-exaggerated.html&quot;&gt;Reports of Value’s Death May Be Greatly Exaggerated.&lt;/a&gt; &lt;a href=&quot;#fnref:11&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:71&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Babu, Levine, Ooi, Schroeder, and Stamelos (2019). &lt;a href=&quot;https://www.aqr.com/Insights/Research/Journal-Article/You-Cant-Always-Trend-When-You-Want&quot;&gt;You Can’t Always Trend When You Want.&lt;/a&gt; &lt;a href=&quot;#fnref:71&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:72&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Technically, it identified three components, but these two are the relevant ones for our purposes. &lt;a href=&quot;#fnref:72&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:12&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Blitz (2011). &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1949008&quot;&gt;Strategic Allocation to Premiums in the Equity Market.&lt;/a&gt; &lt;a href=&quot;#fnref:12&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:12:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt; &lt;a href=&quot;#fnref:12:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;3&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:57&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Asness (2015). &lt;a href=&quot;https://www.aqr.com/Insights/Perspectives/How-Can-a-Strategy-Still-Work-If-Everyone-Knows-About-It&quot;&gt;How Can a Strategy Everyone Knows About Still Work?&lt;/a&gt; &lt;a href=&quot;#fnref:57&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:58&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Hurst, Ooi, and Pedersen (2013). &lt;a href=&quot;http://pages.stern.nyu.edu/~lpederse/papers/DemystifyingManagedFutures.pdf&quot;&gt;Demystifying Managed Futures.&lt;/a&gt; &lt;a href=&quot;#fnref:58&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:59&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;AQR (2018). &lt;a href=&quot;https://images.aqr.com/-/media/AQR/Documents/Alternative-Thinking/Alternative-thinking-3Q18.pdf&quot;&gt;It Was the Worst of Times: Diversification During a Century of Drawdowns.&lt;/a&gt; &lt;a href=&quot;#fnref:59&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:59:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:60&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I used the data series titled “Portfolios Formed on Book-to-Market” and “10 Portfolios Formed on Momentum”. &lt;a href=&quot;#fnref:60&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:61&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I used the data series titled “Fama/French 3 Factors”, “Portfolios Formed on Book-to-Market”, and “10 Portfolios Formed on Momentum”. &lt;a href=&quot;#fnref:61&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:36&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Asness, Moskowitz, and Pedersen (2013). &lt;a href=&quot;http://pages.stern.nyu.edu/~lpederse/papers/ValMomEverywhere.pdf&quot;&gt;Value and Momentum Everywhere.&lt;/a&gt; &lt;a href=&quot;#fnref:36&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:38&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;“Optimal” is defined as the allocation that maximizes the Sharpe ratio. &lt;a href=&quot;#fnref:38&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:37&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Calculated using &lt;a href=&quot;https://github.com/michaeldickens/public-scripts/blob/master/mvo.py&quot;&gt;mvo.py&lt;/a&gt; &lt;a href=&quot;#fnref:37&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:39&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;DeMiguel, Garlappi, and Uppal (2007). &lt;a href=&quot;http://faculty.london.edu/avmiguel/DeMiguel-Garlappi-Uppal-RFS.pdf&quot;&gt;Optimal Versus Naive Diversification: How Inefficient is the 1/N Portfolio Strategy?&lt;/a&gt; &lt;a href=&quot;#fnref:39&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:19&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;AQR has two different managed futures funds, the only major difference being that one targets relatively low volatility, and the other targets high vol. Here I linked the high-volatility fund because the expenses are lower on a vol-adjusted basis, and &lt;a href=&quot;https://mdickens.me/2020/01/06/how_much_leverage_should_altruists_use/&quot;&gt;altruists probably prefer higher volatility&lt;/a&gt;. &lt;a href=&quot;#fnref:19&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:43&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;In a few papers, AQR defines “defensive” as betting against beta, e.g., Ilmanen et al. (2019).&lt;sup id=&quot;fnref:10:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:10&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;31&lt;/a&gt;&lt;/sup&gt; Presumably, this fund uses the same definition. The details of the betting-against-beta factor are specified by Frazzini and Pedersen (2013), &lt;a href=&quot;http://pages.stern.nyu.edu/~lpederse/papers/BettingAgainstBeta.pdf&quot;&gt;Betting Against Beta.&lt;/a&gt; This is related to the low-volatility factor discussed by Berkin and Swedroe. &lt;a href=&quot;#fnref:43&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:62&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I originally read their book in 2016, and forgot most of its content. I identified these two AQR funds without any memory of what funds Berkin and Swedroe had recommended. Then, in the process of researching other parts of this essay, I re-read Berkin and Swedroe and saw that they recommended the two funds that I had already listed. &lt;a href=&quot;#fnref:62&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:15&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;&lt;a href=&quot;https://flirtingwithmodels.libsyn.com/eric-crittenden-trend-following-s3e7&quot;&gt;Eric Crittenden – All-Weather Portfolios with Trend Following (S3E7).&lt;/a&gt; Flirting with Models podcast. 2020. Around 9:30. &lt;a href=&quot;#fnref:15&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:18&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;When this essay was originally published, I wrote that I invested in the &lt;a href=&quot;https://www.axsinvestments.com/axs-chesapeake-strategy-fund/&quot;&gt;AXS Chesapeake Strategy Fund&lt;/a&gt;. In February 2021, I  moved my investment to &lt;a href=&quot;https://kfafunds.com/kmlm/&quot;&gt;KMLM&lt;/a&gt;, a new fund that launched shortly after I initially published this essay, primarily because (1) it charges lower fees on a vol-adjusted basis and (2) it’s an ETF instead of a mutual fund, which is more tax-efficient. There exist a few managed futures ETFs with lower fees, but I don’t invest in them because they have too little volatility. Investing in a cheap but low-vol fund means you need to use more leverage, so you end up paying more than if you had invested in the more expensive high-vol fund.&lt;/p&gt;

      &lt;p&gt;As of 2022, I invest in managed futures through a separately managed account instead of an ETF. But if I did use an ETF, I’d use KMLM. &lt;a href=&quot;#fnref:18&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:67&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Coincidentally (or perhaps not so coincidentally), both Wesley Gray and Cliff Asness, co-founders of Alpha Architect and AQR respectively, got their PhDs under Eugene Fama—one of the most influential living figures in economics and widely described as the father of the efficient market hypothesis.&lt;/p&gt;

      &lt;p&gt;Does this prove that, to run a really good asset management firm, you have to study under Eugene Fama? &lt;a href=&quot;#fnref:67&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:63&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I calculated this by combining Alpha Architect’s data with market data from the &lt;a href=&quot;https://mba.tuck.dartmouth.edu/pages/faculty/ken.french/data_library.html&quot;&gt;Ken French data library&lt;/a&gt;, specifically the series titled “Fama/French 3 Factors”. &lt;a href=&quot;#fnref:63&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:20&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Instead of a 50/50 split, it might be better to use &lt;a href=&quot;https://en.wikipedia.org/wiki/Risk_parity&quot;&gt;risk parity&lt;/a&gt; such that the portfolio is exposed to the same amount of volatility from each of the three factors. &lt;a href=&quot;#fnref:20&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:69&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Unfortunately, many (most?) investment advisors are kind of clueless, even about basic things like minimizing fees. So it’s important to find a good one. &lt;a href=&quot;#fnref:69&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:64&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Actually, I do know of an investment like this: Renaissance Technologies’ Medallion Fund. Unfortunately, they won’t accept my money. &lt;a href=&quot;#fnref:64&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:22&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;&lt;a href=&quot;https://flirtingwithmodels.libsyn.com/eric-crittenden-trend-following-s3e7&quot;&gt;Eric Crittenden – All-Weather Portfolios with Trend Following (S3E7).&lt;/a&gt; Flirting with Models podcast. 2020. Around 54:00. &lt;a href=&quot;#fnref:22&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:65&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Faber (2020). &lt;a href=&quot;https://www.youtube.com/watch?v=rQFK0j6n02g&quot;&gt;The Best Investment Opportunity in 40 Years.&lt;/a&gt; Interview at the Prime Quadrant conference. &lt;a href=&quot;#fnref:65&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:66&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Gray (2019). &lt;a href=&quot;https://www.youtube.com/watch?v=2Fkr4lIpJDU&quot;&gt;Factor Investing, Simple but Not Easy.&lt;/a&gt; &lt;a href=&quot;#fnref:66&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:70&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Not really, but it’s an okay approximation. &lt;a href=&quot;#fnref:70&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:46&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Sheth and Lim (2017). &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3082577&quot;&gt;Fama-French Factors and Business Cycles.&lt;/a&gt; &lt;a href=&quot;#fnref:46&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:47&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Berkin (2020). &lt;a href=&quot;https://texpers.blogspot.com/2020/08/recessions-and-factor-performance-what-history-tells-us.html&quot;&gt;Recessions and Factor Performance: What History Tells Us.&lt;/a&gt; &lt;a href=&quot;#fnref:47&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:48&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Hutchinson and O’Brien (2015). &lt;a href=&quot;https://www.ucc.ie/en/media/research/centreforinvestmentresearch/wp/TSMEconomicRisk_Final.pdf&quot;&gt;Time Series Momentum and Macroeconomic Risk.&lt;/a&gt; &lt;a href=&quot;#fnref:48&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:49&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;&lt;a href=&quot;https://www.aqr.com/Insights/Datasets/Time-Series-Momentum-Factors-Monthly&quot;&gt;Time Series Momentum: Factors, Monthly.&lt;/a&gt; AQR Data Library. AQR Capital Management, LLC. &lt;a href=&quot;#fnref:49&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:41&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;McLean and Pontiff (2016). &lt;a href=&quot;https://rady.ucsd.edu/docs/seminars/Pontiff-paper.pdf&quot;&gt;Does Academic Research Destroy Stock Return Predictability?&lt;/a&gt; &lt;a href=&quot;#fnref:41&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>If Causes Differ Astronomically in Cost-Effectiveness, Then Personal Fit In Career Choice Is Unimportant</title>
				<pubDate>Mon, 23 Nov 2020 00:00:00 -0800</pubDate>
				<link>http://mdickens.me/2020/11/23/astronomical_ev_careers/</link>
				<guid isPermaLink="true">http://mdickens.me/2020/11/23/astronomical_ev_careers/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;&lt;em&gt;&lt;a href=&quot;/confidence_tags&quot;&gt;Confidence&lt;/a&gt;: Unlikely.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Longtermists sometimes argue that some causes matter extraordinarily more than others—not just thousands of times more, but 10^30 or 10^40 times more. The reasoning goes: if &lt;a href=&quot;https://nickbostrom.com/astronomical/waste.html&quot;&gt;civilization has astronomically large potential&lt;/a&gt;, then apparently small actions could have compounding flow-through effects, ultimately affecting massive numbers of people in the long-run future. And the best action might do far more expected good than the second-best.&lt;/p&gt;

&lt;p&gt;I’m not convinced that causes differ astronomically in cost-effectiveness. But if they do, what does that imply about how altruists should choose their careers?&lt;/p&gt;

&lt;p&gt;Suppose I believe cause A is the best, and it’s astronomically better than any other cause. But I have some special skills that make me extremely well-suited to work on cause B. If I work directly on cause B, I can do as much good as a $100 million per year donation to the cause. Or instead, maybe I could get a minimum-wage job and donate $100 per year to cause A. If A is more than a million times better than B, then I should take the minimum-wage job, because the $100 I donate will do more good.&lt;/p&gt;

&lt;p&gt;This is an extreme example. Realistically, there are probably many career paths that can help the top cause. I expect I can find a job supporting cause A that fits my skill set. It might not be the &lt;em&gt;best&lt;/em&gt; job, but it’s probably not astronomically worse, either. If so, I can do much more good by working that job than by donating $100 per year.&lt;/p&gt;

&lt;p&gt;But I might not be able to find an appropriate job in the top cause area. As a concrete example, suppose AI safety matters astronomically more than global priorities research. If I’m a top-tier moral philosopher, I could probably make a lot of progress on prioritization research. But I could have a bigger impact by earning to give and donating to AI safety. Even if the stereotypes are true and my philosophy degree doesn’t let me get a well-paying job, I can still do more good by making a meager donation to AI alignment research than by working directly on a cause where my skills are relevant. Perhaps I can find a job supporting AI safety where I can use my expertise, but perhaps not.&lt;/p&gt;

&lt;p&gt;(This is just an example. I don’t think global priorities research is astronomically worse than AI safety.)&lt;/p&gt;

&lt;p&gt;This argument requires that causes differ astronomically in &lt;em&gt;relative&lt;/em&gt; cost-effectiveness. If cause A is astronomically better than cause B in absolute terms, but cause B is 50% as good in relative terms, then it makes sense for me to take a job in cause B if I can be at least twice as productive.&lt;/p&gt;

&lt;p&gt;I suspect that &lt;a href=&quot;https://reducing-suffering.org/why-charities-dont-differ-astronomically-in-cost-effectiveness/&quot;&gt;causes don’t differ astronomically in cost-effectiveness&lt;/a&gt;. Therefore, people should pay attention to personal fit when choosing an altruistic career, and not just the importance of the cause.&lt;/p&gt;



                </description>
			</item>
		
			<item>
				<title>Donor-Advised Funds vs. Taxable Accounts for Patient Donors</title>
				<pubDate>Mon, 19 Oct 2020 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2020/10/19/DAF_vs_taxable/</link>
				<guid isPermaLink="true">http://mdickens.me/2020/10/19/DAF_vs_taxable/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;&lt;strong&gt;Update 2021-01-15&lt;/strong&gt;: While I still believe I identified the most relevant factors for comparing donor-advised funds and taxable accounts, I now believe my expected utility calculator has significant flaws, and it should probably not be used.&lt;/p&gt;

&lt;p&gt;&lt;del&gt;&lt;em&gt;&lt;a href=&quot;/confidence_tags&quot;&gt;Confidence&lt;/a&gt;: Likely.&lt;/em&gt;&lt;/del&gt;&lt;/p&gt;

&lt;p&gt;A donor-advised fund (DAF) is an investment account that allows donors to take a tax deduction now and give the money to charity later. When you put money into a DAF, you can deduct it just as you would deduct charitable contributions. Then you can direct the DAF on how to invest the money, and choose to donate it whenever you want.&lt;/p&gt;

&lt;p&gt;If you want to invest to give later, DAFs have some clear advantages, plus some limitations. Is it better to use a DAF, or to keep your money in an ordinary (taxable) investment account?&lt;/p&gt;

&lt;p&gt;According to the assumptions made in this essay:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;If I want to invest in a portfolio of stocks and bonds, then I should use a DAF.&lt;/li&gt;
  &lt;li&gt;If I have the ability to use leverage or to invest in assets with low correlation to stocks and bonds, then I should keep my money in a taxable account.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Disclaimer: Nothing in this post should be taken as investment advice or tax advice.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Cross-posted to the &lt;a href=&quot;https://forum.effectivealtruism.org/posts/vh6KPLquguraByLGW/donor-advised-funds-vs-taxable-accounts-for-patient-donors&quot;&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;!-- more --&gt;

&lt;h2 id=&quot;contents&quot;&gt;Contents&lt;/h2&gt;

&lt;ul id=&quot;markdown-toc&quot;&gt;
  &lt;li&gt;&lt;a href=&quot;#contents&quot; id=&quot;markdown-toc-contents&quot;&gt;Contents&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#how-do-dafs-work&quot; id=&quot;markdown-toc-how-do-dafs-work&quot;&gt;How do DAFs work?&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#fixed-costs-taxes-and-fees&quot; id=&quot;markdown-toc-fixed-costs-taxes-and-fees&quot;&gt;Fixed costs (taxes and fees)&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#taxes-on-big-sales&quot; id=&quot;markdown-toc-taxes-on-big-sales&quot;&gt;Taxes on big sales&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#risk-of-complete-loss-of-capital&quot; id=&quot;markdown-toc-risk-of-complete-loss-of-capital&quot;&gt;Risk of complete loss of capital&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#investment-performance&quot; id=&quot;markdown-toc-investment-performance&quot;&gt;Investment performance&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#combining-the-factors&quot; id=&quot;markdown-toc-combining-the-factors&quot;&gt;Combining the factors&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#conclusion&quot; id=&quot;markdown-toc-conclusion&quot;&gt;Conclusion&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#advanced-topic-uncorrelated-investments&quot; id=&quot;markdown-toc-advanced-topic-uncorrelated-investments&quot;&gt;Advanced topic: Uncorrelated investments&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#other-considerations&quot; id=&quot;markdown-toc-other-considerations&quot;&gt;Other considerations&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#appendix-how-to-calculate-expected-utility&quot; id=&quot;markdown-toc-appendix-how-to-calculate-expected-utility&quot;&gt;Appendix: How to calculate expected utility&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#notes&quot; id=&quot;markdown-toc-notes&quot;&gt;Notes&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;how-do-dafs-work&quot;&gt;How do DAFs work?&lt;/h2&gt;

&lt;p&gt;(This article mainly covers investing in the United States, because that’s what I’m familiar with. I don’t know if people in other countries have access to anything like DAFs.)&lt;/p&gt;

&lt;p&gt;Investors can create a donor-advised fund through a DAF provider. The three most notable DAF providers are &lt;a href=&quot;https://www.fidelitycharitable.org/&quot;&gt;Fidelity&lt;/a&gt;, &lt;a href=&quot;https://www.vanguardcharitable.org/&quot;&gt;Vanguard&lt;/a&gt;, and &lt;a href=&quot;https://www.schwabcharitable.org/&quot;&gt;Schwab&lt;/a&gt;. These providers allow you to choose how to invest your DAF from a set of several pre-selected mutual funds. The exact funds vary between providers, but all three of them offer at least a US stock index fund, an international stock index fund, and a US bond index fund.&lt;/p&gt;

&lt;p&gt;These providers also charge an annual fee: 0.6% for accounts with less than $500,000, and lower fees for larger accounts (the exact fee depends on which provider you use).&lt;/p&gt;

&lt;p&gt;How do DAFs compare to ordinary investment accounts? To answer this question, let’s look at three factors:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Fixed costs (taxes and fees)&lt;/li&gt;
  &lt;li&gt;Risk of complete loss of capital&lt;/li&gt;
  &lt;li&gt;Investment performance&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;It’s easy to make comparisons within each of these factors. For example, whichever type of account has lower fixed costs is better along that dimension. But it’s not immediately obvious how to weigh these factors against each other. So I’ll start by looking at the three factors individually and see whether DAFs or taxable accounts come out ahead in each case. Then I’ll examine how to combine the factors, and use this to come up with an answer.&lt;/p&gt;

&lt;p&gt;Sometimes, investors have more choices than just a DAF or a taxable account (for example, investing through a foundation). But to keep things simple, I’ll just compare the two types of accounts.&lt;/p&gt;

&lt;h2 id=&quot;fixed-costs-taxes-and-fees&quot;&gt;Fixed costs (taxes and fees)&lt;/h2&gt;

&lt;p&gt;Ordinary investment accounts and DAFs each have a different kind of fixed cost. Ordinary accounts must pay taxes, and DAFs need to pay administrative fees.&lt;/p&gt;

&lt;p&gt;Ordinary investors have to pay capital gains taxes any time they sell an appreciated asset. To some extent, we can avoid incurring capital gains by investing in diversified &lt;a href=&quot;https://en.wikipedia.org/wiki/Exchange-traded_fund&quot;&gt;ETFs&lt;/a&gt; and not touching them until we want to donate. Then, instead of selling and donating cash, we can donate the ETFs directly.&lt;/p&gt;

&lt;p&gt;But most stock ETFs make regular dividend payouts, and we can’t avoid paying taxes on those. Usually&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;, dividends are taxed at the same rates as capital gains. Most people have to pay a 15% tax on capital gains, but investors in the &lt;a href=&quot;https://taxfoundation.org/2020-tax-brackets/&quot;&gt;top tax bracket&lt;/a&gt; have to pay 20%, plus an additional &lt;a href=&quot;https://www.irs.gov/individuals/net-investment-income-tax&quot;&gt;3.8% investment income tax&lt;/a&gt;. Some states charge more on top of that (e.g., California taxes capital gains at the same rate as income).&lt;/p&gt;

&lt;p&gt;Bonds are taxed at the higher ordinary income rates, with the exception of municipal bonds, which are exempt from federal income tax. Investors who want to minimize taxes can hold municipal bonds. And in fact, altruistic investors might sometimes prefer not to hold any bonds at all, as a way of increasing risk and expected return. Either way, we care more about taxes on stock dividends than on bonds.&lt;/p&gt;

&lt;p&gt;(For more on the tax rates for different asset classes, see &lt;a href=&quot;https://www.bogleheads.org/wiki/Tax-efficient_fund_placement&quot;&gt;Tax-Efficient Fund Placement&lt;/a&gt; on Bogleheads.)&lt;/p&gt;

&lt;p&gt;As of October 2020, a &lt;a href=&quot;https://www.etf.com/VT&quot;&gt;global stock market ETF&lt;/a&gt; has a dividend yield of about 2%, which is close to the historical average&lt;sup id=&quot;fnref:6&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:6&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt;. An investor with an average income who lives in a state with no capital gains tax must pay a 15% tax on this 2%, which means they lose 0.3% of their portfolio to taxes each year. A wealthy investor in California &lt;a href=&quot;https://www.tax-brackets.org/californiataxtable&quot;&gt;has to pay&lt;/a&gt; 33.3% capital gains tax, which comes out to 0.67% of their portfolio.&lt;/p&gt;

&lt;p&gt;Compare that to a donor-advised fund’s administrative fees. Anyone with less than $500,000 in their DAF has to pay a 0.60% fee. Bigger DAFs pay lower fees. For example, Vanguard &lt;a href=&quot;https://www.vanguardcharitable.org/giving-with-vc/fees-and-minimums&quot;&gt;charges&lt;/a&gt; 0.30% on amounts between $500,000 and $1 million, and 0.13% on any amount above $1 million.&lt;/p&gt;

&lt;p&gt;In short, smallish donors (with less than about $1 million) will pay more in DAF fees than they would in taxes. But wealthier donors both pay more in taxes and less in DAF fees, so for them a DAF comes out ahead on this comparison.&lt;/p&gt;
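&lt;p&gt;The comparison above can be sketched in a few lines of code. This is a rough illustration, not advice; the tiered structure of the DAF fee schedule is my reading of Vanguard’s published rates, and the tax figures are the estimates from earlier in this section.&lt;/p&gt;

```python
# Annual cost of a taxable account (dividend tax drag) vs. a DAF
# (administrative fees), using the figures quoted in the post.
def tax_drag(dividend_yield, dividend_tax_rate):
    # Fraction of the portfolio lost to dividend taxes each year
    return dividend_yield * dividend_tax_rate

def daf_fee_rate(balance):
    # Assumed tiering: 0.60% on the first $500k, 0.30% on the next
    # $500k, 0.13% on everything above $1 million
    fee = min(balance, 500_000) * 0.0060
    fee += min(max(balance - 500_000, 0), 500_000) * 0.0030
    fee += max(balance - 1_000_000, 0) * 0.0013
    return fee / balance

# Smallish donor: average income, no state capital gains tax
print(round(tax_drag(0.02, 0.15), 4))     # 0.003, i.e. 0.30% per year
print(round(daf_fee_rate(300_000), 4))    # 0.006, i.e. 0.60% per year

# Wealthy donor in California
print(round(tax_drag(0.02, 0.333), 4))    # 0.0067, about 0.67% per year
print(round(daf_fee_rate(3_000_000), 4))  # 0.0024, about 0.24% per year
```

&lt;p&gt;For the smallish donor, the DAF fee (0.60%) exceeds the tax drag (0.30%); for the wealthy Californian, the ordering flips.&lt;/p&gt;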

&lt;h3 id=&quot;taxes-on-big-sales&quot;&gt;Taxes on big sales&lt;/h3&gt;

&lt;p&gt;Another consideration that’s relevant to some people: you might have a bunch of appreciated assets that you want to exchange for something else. If you sell them and buy diversified ETFs within a taxable account, you’ll have to pay taxes on the gains when you sell. But if you put the money in a donor-advised fund, you probably won’t owe any taxes. By giving the appreciated assets to a DAF, you avoid a big tax bill.&lt;/p&gt;

&lt;p&gt;This sounds appealing, and it does matter under some circumstances. But for a sufficiently patient investor, it doesn’t matter. As an illustration: suppose that if you sell and rebalance in a taxable account, you’ll lose 33% of your balance to taxes, but then you expect your money to compound slightly faster than it would in a DAF—let’s say one percentage point faster. If you’re willing to wait about 41 years, the extra compounding will eventually make up for the initial tax loss.&lt;/p&gt;
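&lt;p&gt;The break-even horizon in that illustration falls out of a one-line calculation (a sketch using the numbers above):&lt;/p&gt;

```python
import math

# Lose 33% of the balance to taxes today, then compound one
# percentage point per year faster than the DAF. Solve
# (1 - tax) * (1 + extra_return) ** years = 1 for years.
tax = 0.33
extra_return = 0.01
years = math.log(1.0 / (1.0 - tax)) / math.log(1.0 + extra_return)
print(math.ceil(years))  # 41
```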

&lt;h2 id=&quot;risk-of-complete-loss-of-capital&quot;&gt;Risk of complete loss of capital&lt;/h2&gt;

&lt;p&gt;Any investment account faces some risk that the money ultimately doesn’t get spent the way you want. Some types of risks affect both DAFs and taxable accounts. For example, if the financial system collapses, both a taxable account and a donor-advised fund will become worthless. But such risks don’t tell us anything about whether to prefer a DAF or a taxable account. So we should focus on risks that affect only one of the two types of accounts.&lt;/p&gt;

&lt;p&gt;Each type of account faces its own unique risk. A DAF has a risk of expropriation, and a taxable account risks value drift.&lt;/p&gt;

&lt;p&gt;DAFs face higher expropriation risks than ordinary investment accounts for two main reasons. First, DAFs have no legal obligation to follow your grant recommendations. &lt;a href=&quot;https://www.philanthropy.com/article/nevada-court-says-donor-advised-funds-can-ignore-donor-advice/&quot;&gt;Courts have ruled&lt;/a&gt; that donor-advised funds may spend the money however they want, without regard to donors’ preferences. It seems unlikely that the large, reputable DAFs like Vanguard, Fidelity, and Schwab would ever ignore donor recommendations, but it could happen.&lt;/p&gt;

&lt;p&gt;Second, DAFs can only operate the way they do because the government allows them to. They legally qualify as charities, so donors who give to DAFs can deduct their contributions. And unlike foundations, DAFs are not required to disburse 5% of their funds each year. The government might decide to revoke DAFs’ favorable tax status, or might impose limitations on them (perhaps in response to bad press, e.g., &lt;a href=&quot;https://www.nybooks.com/articles/2016/07/14/the-undermining-of-american-charity/&quot;&gt;1&lt;/a&gt;,  &lt;a href=&quot;https://ssir.org/articles/entry/the_problem_with_donor_advised_fundsand_a_solution&quot;&gt;2&lt;/a&gt;). That said, such legal changes probably wouldn’t destroy the value of all money held in DAFs, so we should be more concerned about the risk that the DAF provider refuses to abide by our recommendations.&lt;/p&gt;

&lt;p&gt;While DAFs face higher expropriation risk, taxable accounts are exposed to a greater risk of &lt;a href=&quot;https://forum.effectivealtruism.org/posts/jG8pptGksBpzyTxYg/a-qualitative-analysis-of-value-drift-in-ea-1&quot;&gt;value drift&lt;/a&gt;. If I invest my money in an unrestricted account, I might be tempted to withdraw the money and spend it on myself. If I put the money in a DAF, I have no choice but to give it to charity (although there’s still some risk that I’ll give it to a &lt;a href=&quot;https://www.lesswrong.com/posts/3p3CYauiX8oLjmwRF/purchase-fuzzies-and-utilons-separately&quot;&gt;“warm fuzzy”&lt;/a&gt; charity instead of a highly effective one).&lt;/p&gt;

&lt;p&gt;How big of a concern is value drift? Ben Todd wrote an &lt;a href=&quot;https://forum.effectivealtruism.org/posts/eRQe4kkkH2pPzqvam/review-of-value-drift-estimates-and-several-new-estimates&quot;&gt;EA Forum post&lt;/a&gt; cataloging various attempts to estimate the rate of value drift among effective altruists. We don’t have great data, but based on what we know, it seems reasonable to assume about a 5% value drift rate among EAs in general, and a rate of 2-3% among the most committed EAs (which probably includes most of the people reading this).&lt;/p&gt;

&lt;p&gt;(Strictly speaking, value drift isn’t a loss of capital because you still have the money. But if the money isn’t directed toward an altruistic end, then the money is as good as lost for altruistic purposes.)&lt;/p&gt;

&lt;p&gt;How about DAF expropriation? According to &lt;a href=&quot;https://www.metaculus.com/questions/4584/what--of-money-held-in-all-us-dafs-will-be-expropriated-by-2100/&quot;&gt;a Metaculus prediction&lt;/a&gt;, DAF providers have about a 0.2% annual probability of expropriating funds, but this estimate probably isn’t very reliable. Sandberg (n.d.)&lt;sup id=&quot;fnref:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt; estimated that nations face about a 0.5% annual probability of failing, and it seems implausible that DAFs are more than twice as stable as nations. We might roughly estimate the annual probability of DAF expropriation at 1%. An annual probability of 2-3% (on par with the rate of value drift) seems too high, although not totally implausible.&lt;/p&gt;
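&lt;p&gt;These small annual probabilities compound over a long giving horizon. As a rough sketch (the 30-year horizon is arbitrary, and the rates are the point estimates above: roughly 1% per year for DAF expropriation and 2.5% per year for value drift):&lt;/p&gt;

```python
# Probability the money is still available for effective giving
# after a 30-year horizon, under the rough annual rates above.
horizon = 30
daf_survival = (1 - 0.01) ** horizon       # expropriation risk
taxable_survival = (1 - 0.025) ** horizon  # value-drift risk
print(round(daf_survival, 2))      # 0.74
print(round(taxable_survival, 2))  # 0.47
```

&lt;p&gt;On these numbers, money in a DAF is meaningfully more likely to still be directed toward effective giving decades from now.&lt;/p&gt;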

&lt;p&gt;Overall, DAFs appear to have a lower probability of complete loss of capital.&lt;/p&gt;

&lt;h2 id=&quot;investment-performance&quot;&gt;Investment performance&lt;/h2&gt;

&lt;p&gt;Unless they have a good reason not to, investors should want to diversify their portfolios as much as possible. A fully diversified portfolio includes every asset class in the world. But for our purposes, it probably makes sense to just consider investing in stocks and bonds. Other types of assets are harder to buy, and it’s not obvious when buying them is a good idea. Rather than wade into a discussion of which asset classes to include, let’s stick with stocks and bonds.&lt;/p&gt;

&lt;p&gt;Vanguard, Schwab, and Fidelity DAFs all have the ability to invest in the total world stock market and in US bonds. Naturally, an unrestricted account can also invest in these asset classes, such as by buying ETFs like &lt;a href=&quot;https://www.etf.com/VT&quot;&gt;VT&lt;/a&gt; and &lt;a href=&quot;https://www.etf.com/BND&quot;&gt;BND&lt;/a&gt;. So if we want to hold the global market, DAFs and taxable accounts are equally suitable. (But if we wanted to hold some other asset class like gold, we probably couldn’t do that with a DAF.&lt;sup id=&quot;fnref:5&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:5&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt;)&lt;/p&gt;

&lt;p&gt;Altruists might prefer to invest with &lt;a href=&quot;https://reducing-suffering.org/should-altruists-leverage-investments/&quot;&gt;leverage&lt;/a&gt;. DAFs normally can’t use leverage, but taxable accounts can. This is a clear advantage of taxable accounts.&lt;/p&gt;

&lt;p&gt;How much leverage should we use? Given altruists’ risk tolerance, it might be reasonable for the overall altruistic portfolio to use something like 2:1 to 3:1 leverage (although it’s hard to say with confidence; this is just a rough guess). If an investor believes the altruistic portfolio (excluding their own money) already has the correct amount of leverage, then they should use the same amount. If they believe the altruistic portfolio has either too much or too little leverage, then they should compensate by doing the opposite. (I’m inclined to believe most altruists don’t use enough leverage.)&lt;/p&gt;

&lt;h2 id=&quot;combining-the-factors&quot;&gt;Combining the factors&lt;/h2&gt;

&lt;p&gt;So far, we’ve talked about three factors: fixed costs (i.e., taxes and fees), risk of complete loss of capital, and investment performance. Now we want to combine them to decide whether we prefer a taxable account or a DAF. We need to figure out how each of the three factors impacts the expected value of an investment portfolio. To do that, we must answer two questions:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;How bad is (say) a 2% chance of losing all our money, compared to a 100% chance of losing 2%?&lt;/li&gt;
  &lt;li&gt;If our investments go up, presumably that means other altruists’ investments also went up. How much does that matter?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Let’s start with the assumption that we’re a small donor. We don’t have enough money to substantially impact a cause on our own. That means our last dollar spent does about as much good as our first dollar. So the answer to the first question is easy: a 2% chance of losing everything is approximately as bad as a 100% chance of losing 2%. (For a self-interested investor, or for the altruistic community as a whole, this definitely isn’t true—a 2% chance of bankruptcy is much worse than a guaranteed 2% loss.)&lt;/p&gt;

&lt;p&gt;Next, how do market movements impact the value of our portfolio? When we gain or lose money, so do most other altruistic investors. When the market goes up, other altruists become wealthier, so our dollars become less valuable. And the opposite is true when the market goes down. There’s no simple answer to how much this matters, but we can figure it out with some math, which I cover in the &lt;a href=&quot;#appendix-how-to-calculate-expected-utility&quot;&gt;Appendix&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;For the reader’s convenience, I created a &lt;a href=&quot;http://mdickens.me/DAF-vs-taxable-calculator&quot;&gt;DAF vs. taxable calculator&lt;/a&gt; that finds the expected utility of a DAF vs. a taxable account based on the factors discussed in this essay, and allows you to input your own estimates.&lt;/p&gt;

&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;Using the &lt;a href=&quot;http://mdickens.me/DAF-vs-taxable-calculator&quot;&gt;calculator&lt;/a&gt; I created, we can draw some tentative conclusions about whether to use a DAF or a taxable account.&lt;/p&gt;

&lt;p&gt;If I don’t believe altruists should use leverage, or if I don’t have the ability to use leverage in a taxable account, then a DAF seems better for me. The biggest consideration is value drift. A 2-3% chance per year of value drift probably outweighs DAF fees and the risk that a DAF is expropriated.&lt;/p&gt;

&lt;p&gt;If instead I do believe altruists should use substantial leverage (on the order of 2:1 or higher), then the ability to get leverage probably overwhelms all the other factors, and I should keep my money in a taxable account.&lt;/p&gt;

&lt;h3 id=&quot;advanced-topic-uncorrelated-investments&quot;&gt;Advanced topic: Uncorrelated investments&lt;/h3&gt;

&lt;p&gt;In &lt;a href=&quot;https://mdickens.me/2020/01/06/how_much_leverage_should_altruists_use/&quot;&gt;a previous essay&lt;/a&gt;, I observed that altruistic investors should prefer to minimize their correlation with other altruists’ portfolios. But this seems hard to do. I &lt;a href=&quot;https://mdickens.me/2020/01/06/how_much_leverage_should_altruists_use/#donors-can-decrease-correlation&quot;&gt;explored&lt;/a&gt; a few possibilities, and found that managed futures looked promising as an uncommon investment with low correlation to stocks and bonds.&lt;/p&gt;

&lt;p&gt;DAFs generally can’t hold alternative investments like managed futures.&lt;sup id=&quot;fnref:5:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:5&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt; Investors can only hold something like that in a taxable account. Low-correlation investments are so valuable that this probably overwhelms all other considerations (assuming managed futures do in fact provide uncorrelated positive returns, or that we can find some other investment that does).&lt;/p&gt;

&lt;h3 id=&quot;other-considerations&quot;&gt;Other considerations&lt;/h3&gt;

&lt;p&gt;There are some other things worth considering that I haven’t mentioned yet, including:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Tax rates or DAF fees could change in the future.&lt;/li&gt;
  &lt;li&gt;You can always choose to convert a taxable account into a DAF, but you can’t do the reverse. So in that sense, a taxable account has more option value.&lt;/li&gt;
  &lt;li&gt;A DAF can only give to legally registered charities. If you believe that, say, funding an early-stage startup is the best use of your money, then you can’t do that if you put your money in a DAF.&lt;/li&gt;
  &lt;li&gt;If you change your mind about how to invest, a DAF lets you sell your holdings tax-free, while a taxable account does not. On the other hand, DAFs only allow limited investment options, so your new favorite strategy might not be possible in a DAF.&lt;sup id=&quot;fnref:7&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:7&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;5&lt;/a&gt;&lt;/sup&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In addition, I could be missing other important considerations that would change the result of the analysis.&lt;/p&gt;

&lt;h1 id=&quot;appendix-how-to-calculate-expected-utility&quot;&gt;Appendix: How to calculate expected utility&lt;/h1&gt;

&lt;p&gt;Consider an altruistic investor Alice. Assume Alice’s investments are perfectly correlated with the overall altruistic portfolio. Further assume that her favored cause exhibits &lt;a href=&quot;https://en.wikipedia.org/wiki/Isoelastic_utility&quot;&gt;constant relative risk aversion&lt;/a&gt;. Marginal utility with respect to total altruistic spending \(x\) is given by&lt;/p&gt;

&lt;p&gt;\begin{align}
\frac{du(x)}{dx} = x^{-\eta}
\end{align}&lt;/p&gt;

&lt;p&gt;where \(\eta\) is the coefficient of relative risk aversion.&lt;/p&gt;

&lt;p&gt;Small donors have nearly linear utility of spending. Therefore, the expected utility of Alice’s portfolio approximately equals the size of her portfolio multiplied by the marginal utility of the cause (as defined above).&lt;/p&gt;

&lt;p&gt;Let \(\pi\) be the proportion of Alice’s portfolio that she invests in risky assets (\(\pi &amp;gt; 1\) means she uses leverage). For a particular market return \(r\), gross utility after one year (ignoring all factors other than market performance) equals&lt;/p&gt;

&lt;p&gt;\begin{align}
(x e^r)^{-\eta} \cdot e^{\pi r} = x^{-\eta} e^{r (\pi - \eta)}
\end{align}&lt;/p&gt;

&lt;p&gt;The \(x^{-\eta}\) factor is a constant with respect to Alice’s portfolio, so we can ignore it.&lt;/p&gt;

&lt;p&gt;Next, we need to account for the other factors considered in this essay. Alice’s portfolio must pay some fixed cost, determined either by fees on a DAF or taxes on dividends, depending on the type of account. (For simplicity, I’m assuming the market has a constant dividend yield. This isn’t quite true, but it is true that dividends tend to fluctuate much less than market prices do, so it’s a reasonable assumption.) So the size of her portfolio after one year is not \(e^{\pi r}\), but \(e^{\pi r} \cdot (1 - \text{fixed cost})\).&lt;/p&gt;

&lt;p&gt;Then consider the risk of ruin, due to either expropriation or value drift. In the event of a complete loss of capital, Alice’s portfolio produces zero utility. Therefore, we should discount the expected utility of her portfolio by the probability of complete loss of capital.&lt;/p&gt;

&lt;p&gt;Finally, we need to convert the utility of a particular market return into an expected utility over all possible market returns. Assume the market follows a &lt;a href=&quot;https://en.wikipedia.org/wiki/Log-normal_distribution&quot;&gt;log-normal distribution&lt;/a&gt; parameterized by \(\mu\) and \(\sigma\). The expected value of a log-normal distribution is \(e^{\mu + \sigma^2/2}\). Therefore, the expected value of \(e^{r (\pi - \eta)}\) is given by&lt;/p&gt;

&lt;p&gt;\begin{align}
e^{(\pi - \eta) \mu + (\pi - \eta)^2 \sigma^2 / 2}
\end{align}&lt;/p&gt;

&lt;p&gt;Accounting for fixed costs and risk of ruin gives a final expected utility of&lt;/p&gt;

&lt;p&gt;\begin{align}
e^{(\pi - \eta) \mu + (\pi - \eta)^2 \sigma^2 / 2} \cdot (1 - \text{fixed cost}) \cdot (1 - \text{risk of ruin})
\end{align}&lt;/p&gt;
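&lt;p&gt;The final formula translates directly into code. A minimal sketch, where the example inputs are illustrative placeholders rather than estimates:&lt;/p&gt;

```python
import math

def expected_utility(pi, eta, mu, sigma, fixed_cost, risk_of_ruin):
    """Relative expected utility of an account, per the formula above.
    pi: fraction of the portfolio in risky assets (above 1 means leverage)
    eta: coefficient of relative risk aversion
    mu, sigma: parameters of the log-normal market return
    fixed_cost: annual drag from DAF fees or taxes on dividends
    risk_of_ruin: annual probability of expropriation or value drift
    """
    k = pi - eta
    return (math.exp(k * mu + k ** 2 * sigma ** 2 / 2)
            * (1 - fixed_cost) * (1 - risk_of_ruin))

# Illustrative placeholder inputs, not recommendations:
daf = expected_utility(pi=1.0, eta=1.0, mu=0.05, sigma=0.16,
                       fixed_cost=0.006, risk_of_ruin=0.01)
taxable = expected_utility(pi=2.0, eta=1.0, mu=0.05, sigma=0.16,
                           fixed_cost=0.004, risk_of_ruin=0.03)
```

&lt;p&gt;Comparing the two outputs under your own inputs tells you which account type has higher expected utility; presumably a calculator like the one linked in this essay performs a computation along these lines.&lt;/p&gt;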


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;&lt;a href=&quot;https://www.nasdaq.com/articles/qualified-vs-unqualified-dividends-2012-10-26&quot;&gt;Qualified dividends&lt;/a&gt; are taxed as capital gains, while unqualified dividends are taxed as income. Dividends usually count as qualified, but some types of companies such as real estate investment trusts (REITs) pay out unqualified dividends. &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:6&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Over time, companies have begun paying out lower dividends and replacing them with buybacks, which are more tax-efficient. If companies eventually only return capital to shareholders via buybacks, then investors won’t have to pay any taxes at all as long as they never sell any shares. But it seems plausible that at that point, the government will change the rules around buybacks to make them taxable somehow. &lt;a href=&quot;#fnref:6&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:3&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Sandberg (n.d.). Everything is transitory, for sufficiently large values of “transitory.” &lt;a href=&quot;#fnref:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:5&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;DAFs may typically only invest in a pre-selected set of mutual funds. &lt;a href=&quot;https://www.schwabcharitable.org/public/file/P-8085389&quot;&gt;Schwab&lt;/a&gt; and &lt;a href=&quot;https://www.fidelitycharitable.org/content/dam/fc-public/docs/programs/ciap-investment-policies-and-guidelines.pdf&quot;&gt;Fidelity&lt;/a&gt; allow DAF owners with over $250,000 to nominate an investment advisor to manage their DAF. But (1) they explicitly forbid using leverage, and (2) they require the portfolio to track an established benchmark, which means you probably can’t put a significant fraction of the portfolio into assets other than stocks and bonds. That said, I have not seriously looked into this, so there might be a way to get more flexible investments through a DAF. &lt;a href=&quot;#fnref:5&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:5:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:7&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Added this line on 2021-03-18. &lt;a href=&quot;#fnref:7&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>The Risk of Concentrating Wealth in a Single Asset</title>
				<pubDate>Sun, 18 Oct 2020 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2020/10/18/risk_of_concentrating/</link>
				<guid isPermaLink="true">http://mdickens.me/2020/10/18/risk_of_concentrating/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;&lt;em&gt;Last updated 2022-04-29.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;a href=&quot;/confidence_tags&quot;&gt;Confidence&lt;/a&gt;: Highly likely.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Some people hold most or all of their wealth in a single asset. A few examples of common situations where people do this:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Alice works at Google. A large chunk of her compensation comes in the form of Google stock.&lt;/li&gt;
  &lt;li&gt;Bob bought bitcoin a few years ago and the price went up a lot. Now, bitcoin accounts for 50% of his net worth.&lt;/li&gt;
  &lt;li&gt;Carol used her retirement money to buy a second house, and she earns income by renting it out.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is usually a bad idea, and you should go to great lengths to avoid it. It’s bad &lt;em&gt;even if&lt;/em&gt; you have high risk tolerance, because you can get a better expected return by building a diversified portfolio and then adding &lt;a href=&quot;https://reducing-suffering.org/should-altruists-leverage-investments/&quot;&gt;leverage&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id=&quot;key-points&quot;&gt;Key points&lt;/h2&gt;

&lt;ul&gt;
  &lt;li&gt;On average, an individual stock provides the same expected return as the total stock market, but with 2-3 times as much risk. You could apply leverage to a total stock market index and get 2-3 times the expected return for the same level of risk as an individual stock. Or you could add leverage to the global market portfolio and get 3-4 times the expected return. &lt;a href=&quot;#how-risky-are-individual-stocks&quot;&gt;[More]&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;The same principle applies to other types of individual assets, such as private company stock, rental properties, and cryptocurrency. &lt;a href=&quot;#assets-other-than-stocks&quot;&gt;[More]&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;If you hold a lot of money in a single asset and want to diversify, you may have to pay capital gains tax when you sell. The diversification benefits probably overcome the tax hit after about 3-10 years. &lt;a href=&quot;#the-impact-of-taxes&quot;&gt;[More]&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;You can reduce taxes by donating to charity, or by putting the money in a donor-advised fund or a foundation. &lt;a href=&quot;#reducing-taxes-by-donating&quot;&gt;[More]&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;If you’re a major stakeholder in an asset, you might depress the price by selling. You can reduce your market impact by selling slowly, or by paying your broker to manage the sale for you. &lt;a href=&quot;#major-stakeholders-care-about-market-impact&quot;&gt;[More]&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;An exception: Altruistic investors don’t just care about their own investments, but about the overall altruistic portfolio. If you hold an individual asset that other altruists can’t hold, such as equity in a private company, it might make sense to keep it. &lt;a href=&quot;#possible-exceptions&quot;&gt;[More]&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;If the effective altruism community holds on the order of $30 billion in overly concentrated investments, then diversifying these funds could be worth as much as $1 billion in additional donations per year. &lt;a href=&quot;#expected-value-of-diversifying&quot;&gt;[More]&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Note: In this essay, I make some illustrations using historical market data. Future market behavior will probably look different, so the exact numbers I use won’t apply. But I expect the basic principles to remain true in the future.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Disclaimer: I am not an investment advisor and this should not be taken as investment advice.&lt;/em&gt;&lt;/p&gt;

&lt;!-- more --&gt;

&lt;p&gt;&lt;em&gt;Cross-posted to the &lt;a href=&quot;https://forum.effectivealtruism.org/posts/TXgrYjW2D9JbgbTk7/the-risk-of-concentrating-wealth-in-a-single-asset&quot;&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;h2 id=&quot;contents&quot;&gt;Contents&lt;/h2&gt;

&lt;ul id=&quot;markdown-toc&quot;&gt;
  &lt;li&gt;&lt;a href=&quot;#key-points&quot; id=&quot;markdown-toc-key-points&quot;&gt;Key points&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#contents&quot; id=&quot;markdown-toc-contents&quot;&gt;Contents&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#a-primer-on-risk&quot; id=&quot;markdown-toc-a-primer-on-risk&quot;&gt;A primer on risk&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#how-risky-are-individual-stocks&quot; id=&quot;markdown-toc-how-risky-are-individual-stocks&quot;&gt;How risky are individual stocks?&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#assets-other-than-stocks&quot; id=&quot;markdown-toc-assets-other-than-stocks&quot;&gt;Assets other than stocks&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#the-impact-of-taxes&quot; id=&quot;markdown-toc-the-impact-of-taxes&quot;&gt;The impact of taxes&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#reducing-taxes-by-donating&quot; id=&quot;markdown-toc-reducing-taxes-by-donating&quot;&gt;Reducing taxes by donating&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#major-stakeholders-care-about-market-impact&quot; id=&quot;markdown-toc-major-stakeholders-care-about-market-impact&quot;&gt;Major stakeholders care about market impact&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#possible-exceptions&quot; id=&quot;markdown-toc-possible-exceptions&quot;&gt;Possible exceptions&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#expected-value-of-diversifying&quot; id=&quot;markdown-toc-expected-value-of-diversifying&quot;&gt;Expected value of diversifying&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#acknowledgements&quot; id=&quot;markdown-toc-acknowledgements&quot;&gt;Acknowledgements&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#changelog&quot; id=&quot;markdown-toc-changelog&quot;&gt;Changelog&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#notes&quot; id=&quot;markdown-toc-notes&quot;&gt;Notes&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;a-primer-on-risk&quot;&gt;A primer on risk&lt;/h2&gt;

&lt;p&gt;All else equal, investors prefer to reduce risk. We normally measure risk in terms of &lt;a href=&quot;https://en.wikipedia.org/wiki/Standard_deviation&quot;&gt;standard deviation&lt;/a&gt;: if asset A has twice the standard deviation of asset B, then A is twice as risky.&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;An investor who wants to increase risk can apply &lt;a href=&quot;https://en.wikipedia.org/wiki/Leverage_(finance)&quot;&gt;leverage&lt;/a&gt;: borrowing money to invest. This amplifies the expected return and standard deviation of their portfolio. Specifically, both quantities increase in proportion to the amount of leverage used.&lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt; For example, using 3:1 leverage—borrowing money until you have 3x as much as you started with—would triple the expected return (minus the cost of leverage) and triple the standard deviation.&lt;/p&gt;

&lt;p&gt;If we can freely use leverage,&lt;sup id=&quot;fnref:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt; we don’t care about an asset’s return or standard deviation. What we care about is the &lt;a href=&quot;https://www.investopedia.com/terms/s/sharperatio.asp&quot;&gt;Sharpe ratio&lt;/a&gt;: the expected return minus the risk-free rate, divided by the standard deviation. If we want to take on greater risk, we can add leverage to increase both the return and the volatility. If we think our portfolio is too risky, we can mix in some &lt;a href=&quot;https://www.investopedia.com/terms/t/treasurybill.asp&quot;&gt;T-bills&lt;/a&gt;. Either way, the Sharpe ratio stays the same.&lt;sup id=&quot;fnref:14&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:14&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;Therefore, when comparing investment choices, we should focus on the Sharpe ratios. In theory, we should invest in whatever portfolio has the highest Sharpe ratio, and then add leverage or mix in T-bills to hit the desired level of risk (although we can’t always do this due to practical constraints). As we will see in the next two sections, we can expect a well-diversified global portfolio to provide a higher Sharpe ratio than almost any individual asset.&lt;/p&gt;
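&lt;p&gt;The leverage math can be sketched in a few lines. The return and volatility figures here are hypothetical, and the sketch assumes frictionless borrowing at the risk-free rate:&lt;/p&gt;

```python
def sharpe(expected_return, stdev, risk_free):
    """Sharpe ratio: excess return per unit of risk."""
    return (expected_return - risk_free) / stdev

def lever(expected_return, stdev, risk_free, leverage):
    """Apply frictionless leverage: borrow (leverage - 1) at the
    risk-free rate. Excess return and stdev both scale linearly."""
    levered_return = leverage * (expected_return - risk_free) + risk_free
    levered_stdev = leverage * stdev
    return levered_return, levered_stdev

rf = 0.02
r, sd = 0.07, 0.10               # hypothetical unlevered portfolio
r3, sd3 = lever(r, sd, rf, 3.0)  # same portfolio at 3:1 leverage
# sharpe(r, sd, rf) equals sharpe(r3, sd3, rf): leverage changes both
# risk and return, but leaves the Sharpe ratio unchanged.
```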

&lt;h2 id=&quot;how-risky-are-individual-stocks&quot;&gt;How risky are individual stocks?&lt;/h2&gt;

&lt;p&gt;I couldn’t find any published research on the standard deviations of individual stocks. So I did my own analysis.&lt;/p&gt;

&lt;p&gt;Using the Center for Research in Security Prices (CRSP) data set, I calculated the month-to-month standard deviation of every US stock from 1960 to 2015.&lt;/p&gt;

&lt;p&gt;I divided stocks into categories by market capitalization, and calculated the average stock’s standard deviation within each category. I defined the categories as follows:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Micro-caps: smallest 40% of stocks&lt;/li&gt;
  &lt;li&gt;Small-caps: 40th to 70th percentile&lt;/li&gt;
  &lt;li&gt;Mid-caps: 70th to 90th percentile&lt;/li&gt;
  &lt;li&gt;Large-caps: 90th to 99th percentile&lt;/li&gt;
  &lt;li&gt;Mega-caps: 99th percentile and above&lt;/li&gt;
&lt;/ul&gt;
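&lt;p&gt;A sketch of this kind of analysis (the column names and data layout here are hypothetical; CRSP’s actual schema differs):&lt;/p&gt;

```python
import numpy as np
import pandas as pd

def annualized_stdev(monthly_returns):
    """Annualize the sample standard deviation of monthly returns."""
    return np.std(monthly_returns, ddof=1) * np.sqrt(12)

# Hypothetical frame with columns: permno (stock id), ret (monthly
# return), mktcap (market capitalization).
def avg_stdev_by_size(df):
    per_stock = df.groupby("permno").agg(
        stdev=("ret", annualized_stdev),
        mktcap=("mktcap", "last"),
    )
    # Size buckets matching the percentile cutoffs listed above.
    cutoffs = [0, 0.40, 0.70, 0.90, 0.99, 1.0]
    labels = ["Micro", "Small", "Mid", "Large", "Mega"]
    per_stock["bucket"] = pd.qcut(per_stock["mktcap"], cutoffs, labels=labels)
    return per_stock.groupby("bucket", observed=True)["stdev"].mean()
```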

&lt;p&gt;This table gives the average volatility of stocks within each category, and the market cap of the smallest stock in the category as of 2015.&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt;Standard Deviation&lt;/th&gt;
      &lt;th&gt;Min Size&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Micro&lt;/td&gt;
      &lt;td&gt;64%&lt;/td&gt;
      &lt;td&gt;$0&lt;sup id=&quot;fnref:13&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:13&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;5&lt;/a&gt;&lt;/sup&gt;&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Small&lt;/td&gt;
      &lt;td&gt;48%&lt;/td&gt;
      &lt;td&gt;$310M&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Mid&lt;/td&gt;
      &lt;td&gt;42%&lt;/td&gt;
      &lt;td&gt;$1.6B&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Large&lt;/td&gt;
      &lt;td&gt;40%&lt;/td&gt;
      &lt;td&gt;$8.3B&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Mega&lt;/td&gt;
      &lt;td&gt;27%&lt;/td&gt;
      &lt;td&gt;$75B&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;This graph shows the standard deviation by market cap of stocks from 1995 to 2015:&lt;sup id=&quot;fnref:12&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:12&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;6&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/stdev-by-market-cap.png&quot; /&gt;&lt;/p&gt;

&lt;p&gt;The US stock market has historically had a standard deviation of about 16%. By comparing to the data above, we can see that large-cap stocks experienced about double the volatility of the total market, and small-caps had triple the volatility. Mega-caps were more stable than other stocks, but still showed greater volatility than the total market.&lt;/p&gt;

&lt;p&gt;We can corroborate this finding by looking at &lt;a href=&quot;http://www.cboe.com/products/vix-index-volatility/volatility-indexes&quot;&gt;CBOE volatility indexes&lt;/a&gt;. These provide the market’s best guess as to the future volatility of various assets. The VIX gives the projected volatility of the S&amp;amp;P 500; VXAZN, VXAPL, VXGS, VXGOG, and VXIBM project the volatilities of five select stocks (Amazon, Apple, Goldman Sachs, Google, and IBM, respectively). The exact relationships vary over time, but usually, the individual-stock volatility indexes are about twice as high as the VIX.&lt;/p&gt;

&lt;p&gt;So far, we’ve seen that the US stock market as a whole is much less volatile than an individual stock. But it’s usually &lt;a href=&quot;https://www.bridgewater.com/research-and-insights/geographic-diversification-can-be-a-lifesaver-yet-most-portfolios-are-highly-geographically-concentrated&quot;&gt;not a good idea to invest only in US stocks&lt;/a&gt;. We can reduce volatility even further by investing in the &lt;a href=&quot;https://www.forbes.com/sites/phildemuth/2014/07/30/meet-the-global-market-portfolio-the-optimal-portfolio-for-the-average-investor/&quot;&gt;global market portfolio&lt;/a&gt; (GMP). How does individual-stock volatility compare to the GMP?&lt;/p&gt;

&lt;p&gt;We can compare the US stock market to a fully diversified portfolio using data from Meb Faber’s book, &lt;a href=&quot;https://www.amazon.com/gp/product/B00TYY3F3C/&quot;&gt;Global Asset Allocation&lt;/a&gt;. The book reports performance from 1973 to 2013:&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt;T-Bills&lt;/th&gt;
      &lt;th&gt;US Large-Cap&lt;/th&gt;
      &lt;th&gt;GMP&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Return&lt;/td&gt;
      &lt;td&gt;5.3%&lt;/td&gt;
      &lt;td&gt;10.2%&lt;/td&gt;
      &lt;td&gt;9.9%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Standard Deviation&lt;/td&gt;
      &lt;td&gt;1.0%&lt;/td&gt;
      &lt;td&gt;15.6%&lt;/td&gt;
      &lt;td&gt;8.5%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Sharpe Ratio&lt;/td&gt;
      &lt;td&gt;0&lt;/td&gt;
      &lt;td&gt;0.32&lt;/td&gt;
      &lt;td&gt;0.55&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;US large-cap stocks (think the S&amp;amp;P 500) had a Sharpe ratio of 0.32. The global market portfolio—which includes global stocks, bonds, and real assets—had a Sharpe ratio nearly twice as high, at 0.55.&lt;sup id=&quot;fnref:4&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:4&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;7&lt;/a&gt;&lt;/sup&gt; Putting this together with the data on individual-stock volatility tells us that individual large-cap stocks were three to four times as risky as GMP, without a meaningfully higher return to compensate. Small-cap stocks were even riskier, although small-caps also tended to earn higher returns, so they had comparable Sharpe ratios to large-caps.&lt;/p&gt;

&lt;p&gt;Individual large-cap stocks on average had only 1/3 the Sharpe ratio of the GMP. What does that mean in practical terms?&lt;/p&gt;

&lt;p&gt;Suppose it’s 1973, and you’re comfortable with a lot of risk—your target is a 30% standard deviation. You could put all your money in a single stock (like Exxon or IBM) and earn a 10% average return over the next 40 years. Or you could buy the GMP, lever it up 3:1, and earn 19% after fees.&lt;sup id=&quot;fnref:5&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:5&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;8&lt;/a&gt;&lt;/sup&gt; If you invest $100 in the single stock, you will have $4,500 by the end of 2013.&lt;sup id=&quot;fnref:6&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:6&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;9&lt;/a&gt;&lt;/sup&gt; That same $100 in the global market portfolio will turn into $114,000.&lt;/p&gt;
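&lt;p&gt;The compounding arithmetic behind those figures can be sketched as follows. This is a simplified constant-return version of the calculation, so it lands in the same ballpark as (but not exactly on) the footnoted figures:&lt;/p&gt;

```python
def compound(initial, annual_return, years):
    """Grow `initial` at a constant annual return for `years` years."""
    return initial * (1 + annual_return) ** years

single_stock = compound(100, 0.10, 40)  # one large-cap stock, 1973-2013
levered_gmp = compound(100, 0.19, 40)   # GMP at 3:1 leverage, after fees
# Both portfolios target roughly a 30% standard deviation, but the
# levered GMP compounds at nearly double the annual rate, so it ends
# up more than 20 times larger.
```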

&lt;h3 id=&quot;assets-other-than-stocks&quot;&gt;Assets other than stocks&lt;/h3&gt;

&lt;p&gt;The same basic principle applies for single assets other than publicly-traded stock, such as houses, cryptocurrencies, or private equity. Individual assets are far more volatile than the global market portfolio.&lt;sup id=&quot;fnref:7&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:7&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;10&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;We don’t really know how volatile individual house prices are, because most houses only get bought or sold once every few years. There’s &lt;a href=&quot;https://rhsfinancial.com/2019/05/01/better-investment-stocks-real-estate/&quot;&gt;mixed evidence&lt;/a&gt; on how well the aggregate real estate market performs compared to the stock market, but my best guess is that real estate performs as well as or worse than equities after adjusting for risk. And buying a single property brings idiosyncratic risks, such as property damage or changes in demand in that particular city (or even neighborhood).&lt;/p&gt;

&lt;p&gt;(This is not to say it’s a bad idea to buy a house if you want to live in it. I’m specifically talking about buying real estate as an investment.)&lt;/p&gt;

&lt;p&gt;We do know how volatile cryptocurrencies are: they’re &lt;a href=&quot;https://www.buybitcoinworldwide.com/volatility-index/&quot;&gt;a lot more volatile than individual stocks&lt;/a&gt;.&lt;sup id=&quot;fnref:9&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:9&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;11&lt;/a&gt;&lt;/sup&gt; So far, bitcoin has justified its volatility with high returns: from 2012 to the end of 2019, it had a Sharpe ratio of 4.1, far higher than that of the global market portfolio. If you expect it to continue returning an average of 300% per year, as it has since 2012, then it might be a good investment. But that’s a pretty bold expectation, and I wouldn’t count on it.&lt;sup id=&quot;fnref:8&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:8&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;12&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;On the subject of private equity, there’s some evidence that &lt;a href=&quot;https://www.cambridgeassociates.com/wp-content/uploads/2018/10/WEB-2018-Q2-USVC-Benchmark-Book.pdf&quot;&gt;venture capital firms outperform public markets&lt;/a&gt;.&lt;sup id=&quot;fnref:16&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:16&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;13&lt;/a&gt;&lt;/sup&gt; But this outperformance comes with greater risk: three quarters of entrepreneurs end up with nothing. After adjusting for risk, startup founders on average earn less than they would at a salaried job.&lt;sup id=&quot;fnref:17&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:17&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;14&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;h2 id=&quot;the-impact-of-taxes&quot;&gt;The impact of taxes&lt;/h2&gt;

&lt;p&gt;Suppose you have a lot of money in a single stock, and you want to diversify. But if you sell, you’ll have to pay tax on the capital gains. How much does that matter?&lt;/p&gt;

&lt;p&gt;As of 2020, most people in the United States &lt;a href=&quot;https://www.irs.gov/taxtopics/tc409&quot;&gt;have to pay 15% tax&lt;/a&gt; on capital gains. People in the top tax bracket have to pay 20%. Additionally, you may have to pay more if your state charges capital gains taxes. California residents in the top tax bracket have to pay &lt;a href=&quot;https://pocketsense.com/capital-gains-taxes-state-california-8405869.html&quot;&gt;13.3%&lt;/a&gt; in state capital gains tax, for a total of 33.3%. (Apologies to readers outside the US; I don’t know how your taxes work.)&lt;/p&gt;

&lt;p&gt;Let’s compare two hypothetical options:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Keep holding the single stock, earning an expected 10% return without paying taxes.&lt;/li&gt;
  &lt;li&gt;Sell the stock, pay capital gains tax, and invest in the global market portfolio with 3:1 leverage.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If you invest for long enough, option 2 will earn enough (in expectation) to overcome the cost of paying taxes. At a 15% tax rate, this only takes 1.9 years (in expectation). At a 33.3% rate, it takes 4.7 years. If you’re a long-term investor, it looks appealing to take the tax hit in exchange for higher expected return.&lt;sup id=&quot;fnref:18&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:18&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;15&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;But most people probably don’t want 3:1 leverage. What if you invest at, say, 1.6:1 leverage instead? With that amount of leverage, the global market portfolio historically had about the same volatility as the (unlevered) S&amp;amp;P 500.&lt;/p&gt;

&lt;p&gt;Before we can answer which option is better, we have to resolve a problem. The GMP with 1.6:1 leverage has lower risk than a single stock, so just looking at the return doesn’t fully capture why we prefer the GMP. How can we compare them?&lt;/p&gt;

&lt;p&gt;The solution is to use &lt;a href=&quot;https://www.investopedia.com/terms/c/certaintyequivalent.asp&quot;&gt;certainty-equivalent interest rates&lt;/a&gt;. If you want 1.6:1 leverage on the global market portfolio, then your certainty-equivalent interest rate is 9.1% (calculated using historical data from 1973 to 2013).&lt;sup id=&quot;fnref:10&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:10&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;16&lt;/a&gt;&lt;/sup&gt; That means you’re indifferent between holding the GMP at 1.6:1 leverage and holding an asset that pays out a guaranteed 9.1% per year—which, at the 1973-2013 average risk-free rate of 5.3%, gives an excess return of 3.8% over the risk-free rate. (The risk-free rate today is much lower, so the forward-looking certainty-equivalent interest rate would be lower, too. The 9.1% figure only applies historically.)&lt;/p&gt;

&lt;p&gt;Historically, an individual stock had a certainty-equivalent interest rate of 5.4% (0.1% plus the historical risk-free rate of 5.3%). Therefore, holding the GMP rather than the single stock was as good as a guaranteed 3.7% annual return (= 9.1% – 5.4%, or 3.8% – 0.1% if you exclude the risk-free rate). At a 3.7% interest rate, it takes 4.4 years to make up for a 15% capital gains tax, and 11.1 years to recover from a 33.3% tax. 11.1 years is a long time, but many investors have longer time horizons than that.&lt;/p&gt;
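
&lt;p&gt;As a sketch, the break-even times above come from solving (1 - t)(1 + r)&lt;sup&gt;n&lt;/sup&gt; = 1 for n, where t is the tax rate and r is the 3.7% certainty-equivalent advantage. (This treats the position as nearly all capital gains; small discrepancies with the figures above come from rounding and compounding conventions.)&lt;/p&gt;

```python
import math

def breakeven_years(tax_rate, ce_advantage):
    """Years until the certainty-equivalent advantage of diversifying
    overcomes the up-front capital gains tax.

    Assumes the position is nearly all gains, so selling leaves a
    fraction (1 - tax_rate) to reinvest. Solves
    (1 - t) * (1 + r)**n = 1 for n.
    """
    return math.log(1 / (1 - tax_rate)) / math.log(1 + ce_advantage)

# 3.7% certainty-equivalent advantage of the levered GMP over one stock
print(round(breakeven_years(0.15, 0.037), 1))   # -> 4.5
print(round(breakeven_years(0.333, 0.037), 1))  # -> 11.1
```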

&lt;h3 id=&quot;reducing-taxes-by-donating&quot;&gt;Reducing taxes by donating&lt;/h3&gt;

&lt;p&gt;If you ultimately plan to donate your savings, you can do three main things to reduce your tax burden:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Donate your stock immediately instead of selling it.&lt;/li&gt;
  &lt;li&gt;Put your stock in a &lt;a href=&quot;https://en.wikipedia.org/wiki/Donor-advised_fund&quot;&gt;donor-advised fund&lt;/a&gt; (DAF).&lt;/li&gt;
  &lt;li&gt;Set up a &lt;a href=&quot;https://en.wikipedia.org/wiki/Private_foundation&quot;&gt;foundation&lt;/a&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The first option makes sense if you want to donate now, but not if you want to donate later. In that case, you might still be able to put your stock in a DAF. You can deduct donations to a DAF, and assets you hold inside a DAF don’t incur capital gains tax. But DAFs have three important downsides:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;DAFs can only disburse funds to legally registered charities.&lt;/li&gt;
  &lt;li&gt;DAFs charge fees, usually 0.6% per year.&lt;/li&gt;
  &lt;li&gt;DAFs typically only let you invest in a few select mutual funds.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The longer you hold money in a DAF, the more you lose to the annual fee. But if you keep your money in a taxable account, you have to pay taxes on dividends, which might cost you more than 0.6% per year anyway—in which case the DAF comes out ahead by avoiding those taxes.&lt;/p&gt;
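
&lt;p&gt;A rough comparison, using a hypothetical 2% dividend yield (the yield and tax rates here are illustrative, and this ignores the DAF’s other tax benefits, such as the deduction and avoided capital gains tax):&lt;/p&gt;

```python
DAF_ANNUAL_FEE = 0.006  # typical 0.6% annual DAF fee

def dividend_tax_drag(dividend_yield, dividend_tax_rate):
    """Annual return lost to taxes on dividends in a taxable account."""
    return dividend_yield * dividend_tax_rate

# Hypothetical 2% dividend yield:
low_tax = dividend_tax_drag(0.02, 0.15)    # 0.30%/yr: cheaper than the DAF fee
high_tax = dividend_tax_drag(0.02, 0.333)  # ~0.67%/yr: the DAF comes out ahead
```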

&lt;p&gt;DAFs usually only allow you to invest in a dozen or so pre-selected mutual funds. You can use the available funds to construct a portfolio of global stocks and bonds, but you might not be able to buy alternative assets like gold. And if you want to do something more sophisticated like &lt;a href=&quot;https://mdickens.me/2020/01/06/how_much_leverage_should_altruists_use/&quot;&gt;leverage&lt;/a&gt; or &lt;a href=&quot;https://forum.effectivealtruism.org/posts/iZp7TtZdFyW8eT5dA/a-generalized-strategy-of-mission-hedging-investing-in-evil&quot;&gt;mission hedging&lt;/a&gt;, you’re probably out of luck.&lt;/p&gt;

&lt;p&gt;If you set up a foundation to hold your money, then the foundation is free to invest however you’d like. But foundations have relatively high fixed costs, so it probably only makes sense to create one if you want to donate a lot of money—perhaps somewhere around $10 million. (I don’t know much about foundations; that’s just my order-of-magnitude guess after reading a few articles about how they work.)&lt;/p&gt;

&lt;p&gt;But be aware that US donors &lt;a href=&quot;https://www.irs.gov/publications/p526#en_US_2019_publink100017780&quot;&gt;can only deduct 30% of their adjusted gross income&lt;/a&gt; (AGI) when donating appreciated securities. For example, if your income is $100,000, then you can’t deduct more than $30,000 worth of stock. If you donate more than that (including by putting it in a DAF), then you only get to deduct the first $30,000. Foundations have stricter limits—donors can only deduct 20% of AGI when donating to a foundation. Under some circumstances, it might still make sense to make a non-deductible contribution to a DAF, because any assets inside the DAF will not incur taxes if they gain value.&lt;/p&gt;

&lt;p&gt;Much more could be said about how DAFs and foundations work. This section is just meant to provide a starting point. I am by no means an expert on taxes; if you plan on incurring a large capital gain, you should first talk to a tax professional.&lt;/p&gt;

&lt;h2 id=&quot;major-stakeholders-care-about-market-impact&quot;&gt;Major stakeholders care about market impact&lt;/h2&gt;

&lt;p&gt;If you’re a major stakeholder in an asset, e.g., if you hold a significant fraction of the stock in a publicly-traded company, then you might not want to sell all your holdings at once. If you try to sell too quickly, that pushes down the price, and you end up selling at a substantial discount. That’s not good.&lt;/p&gt;

&lt;p&gt;You can still sell large holdings as long as you’re careful. Most brokerage firms can assist large investors with making transactions (for example, &lt;a href=&quot;https://www.fidelity.com/trading/active-trader-trading-service-levels&quot;&gt;Fidelity’s active trader services&lt;/a&gt;). Interactive Brokers has an automated algorithm to do this (called &lt;a href=&quot;https://www1.interactivebrokers.com/en/index.php?f=1088&quot;&gt;accumulate/distribute&lt;/a&gt;); I don’t know much about it personally, but an investment professional I trust told me that it works well.&lt;/p&gt;

&lt;p&gt;Investors who want to make their own trades can use a simple approach to reduce their market impact (although this probably isn’t sufficient for truly large transactions):&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Use limit orders, not market orders.&lt;/li&gt;
  &lt;li&gt;When putting in a sell order, only slightly undercut the current best offer.&lt;/li&gt;
  &lt;li&gt;Be patient. Be willing to wait several days to finish trading.&lt;sup id=&quot;fnref:19&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:19&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;17&lt;/a&gt;&lt;/sup&gt;&lt;/li&gt;
&lt;/ol&gt;
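
&lt;p&gt;The approach above can be sketched as a simple order plan. Everything here is illustrative: the function and parameter names are hypothetical, not a broker API, and a real implementation would re-check the order book before placing each slice.&lt;/p&gt;

```python
def plan_sell_orders(total_shares, best_ask, days, tick=0.01):
    """Sketch of the patient approach: split a large sell into daily
    slices, each priced with a limit order that slightly undercuts the
    current best offer rather than crossing the spread.
    """
    limit_price = best_ask - tick          # slightly undercut the best offer
    slice_size = -(-total_shares // days)  # ceiling division
    orders = []
    remaining = total_shares
    while remaining > 0:
        orders.append((min(slice_size, remaining), limit_price))
        remaining -= slice_size
    return orders

# Sell 10,000 shares over 4 days with a best offer of $100.00:
orders = plan_sell_orders(10_000, 100.00, days=4)
# four limit orders of 2,500 shares each, priced at $99.99
```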

&lt;h2 id=&quot;possible-exceptions&quot;&gt;Possible exceptions&lt;/h2&gt;

&lt;p&gt;In general, it’s not a good idea to hold most of your net worth in a single asset. There is at least one exception: if you’re an altruistic investor and you hold an asset that few other altruists own.&lt;/p&gt;

&lt;p&gt;Self-interested people only care about how much money they have, but altruists care about how much money all other (value-aligned) altruists have. As an altruist, rather than focusing on your personal portfolio, you want to optimize the overall altruistic portfolio.&lt;/p&gt;

&lt;p&gt;In some situations, it might make sense to hold most of your money in a single asset. For example, if you’re a founder or employee at a small private company, you might be the only value-aligned altruist who can hold equity in that company. In that case, you might prefer to hold on to that equity instead of selling. Even though your personal portfolio is highly concentrated, you’re actually diversifying the overall altruistic portfolio by investing in an asset when nobody else can (as long as your holding doesn’t represent too large a percentage of the altruistic portfolio).&lt;/p&gt;

&lt;p&gt;As far as I know, that’s the only situation where concentrating your investments in one asset can actually reduce risk. But there are some other situations where you might be willing to accept concentration risk in exchange for some other benefit. For example:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;You put most of your wealth in a house because you want to own your own home.&lt;/li&gt;
  &lt;li&gt;You’re the founder of a company, and you want to control a majority of the voting shares.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Sometimes, considerations like these might matter enough to outweigh the extra risk. But people usually underestimate the risk involved in concentrating their wealth, and they should diversify more often than they do.&lt;/p&gt;

&lt;h2 id=&quot;expected-value-of-diversifying&quot;&gt;Expected value of diversifying&lt;/h2&gt;

&lt;p&gt;The effective altruism community probably has on the order of $30 billion in concentrated investments. If we collectively shifted all this money toward a globally diversified portfolio, historically that would have been worth a certainty-equivalent 3.7% return, or just over $1 billion per year. Of course, these numbers are rough estimates. And a lot of funds are probably concentrated in US equities, which isn’t ideal but still isn’t as bad as holding just a single stock. Ultimately, it seems likely that diversifying the collective effective altruist portfolio could be worth $500 million or more per year in extra donations.&lt;/p&gt;
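
&lt;p&gt;The arithmetic behind that headline figure is straightforward, keeping in mind that both inputs are rough estimates:&lt;/p&gt;

```python
concentrated = 30e9   # rough estimate of EA money in concentrated investments
ce_advantage = 0.037  # historical certainty-equivalent benefit of diversifying

annual_value = concentrated * ce_advantage
print(f"${annual_value / 1e9:.2f}B per year")  # -> $1.11B per year
```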

&lt;h1 id=&quot;acknowledgements&quot;&gt;Acknowledgements&lt;/h1&gt;

&lt;p&gt;Thanks to Peter Hurford, David Reinstein, and Gavin Taylor for reviewing drafts of this essay.&lt;/p&gt;

&lt;h1 id=&quot;changelog&quot;&gt;Changelog&lt;/h1&gt;

&lt;p&gt;2022-04-20:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;Update expected value of diversifying based on how &lt;a href=&quot;https://forum.effectivealtruism.org/posts/zA6AnNnYBwuokF8kB/is-effective-altruism-growing-an-update-on-the-stock-of&quot;&gt;EA funding has grown&lt;/a&gt;.&lt;/li&gt;
  &lt;li&gt;Improve the standard deviation by market cap chart to be more readable.&lt;/li&gt;
  &lt;li&gt;Update footnote to reflect the changes to the aforementioned chart.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;2022-04-29: Update “Key points” to reflect the changes from 2022-04-20.&lt;/p&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;William Bernstein’s short book &lt;a href=&quot;https://www.amazon.com.au/Deep-Risk-History-Portfolio-Investing-ebook/dp/B00EV25GAM&quot;&gt;Deep Risk&lt;/a&gt; explains how to think about risk beyond just standard deviation. He defines “deep risk” as the risk of permanent loss of capital.&lt;/p&gt;

      &lt;p&gt;Individual assets carry much greater deep risk than diversified portfolios because they’re much more likely to go to $0. But it’s hard to quantify how much this matters. For the purposes of this essay, we will use standard deviation as the measure of risk because it’s easy to reason about while still doing a decent job of capturing what we mean by “risk”. &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:2&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Technically, mean return doesn’t increase in proportion to the amount of leverage. It increases in proportion to the amount of leverage minus the cost of leverage. If the cost of leverage equals the risk-free rate, then increasing leverage by a factor of N increases the return in excess of the risk-free rate by a factor of N.&lt;/p&gt;

      &lt;p&gt;As of this writing, risk-free rates are close to zero, so it’s reasonable to ignore the risk-free rate when making rough estimates. As we will see later in this essay, the risk-free rate used to be much higher, so we need to account for it when looking at historical market behavior. &lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:3&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Theoretical models assume investors can use however much leverage they want, and only have to pay the risk-free rate. That’s not true in practice, but sophisticated investors can usually get leverage without having to pay too much. In &lt;a href=&quot;https://reducing-suffering.org/should-altruists-leverage-investments/#Possible_leverage_methods&quot;&gt;Should Altruists Leverage Investments?&lt;/a&gt;, Brian Tomasik reviews a few methods for applying leverage. I personally get leverage via a margin loan from Interactive Brokers, although this might not be best for everyone. &lt;a href=&quot;#fnref:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:14&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Again, this isn’t true in practice, because you probably have to pay more than the risk-free rate to get leverage. So adding leverage usually decreases the Sharpe ratio by a bit. &lt;a href=&quot;#fnref:14&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:13&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;$0 is the lower bound on how small a micro-cap can be. In 2015, the smallest publicly-traded company was valued at a little under $100,000. &lt;a href=&quot;#fnref:13&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:12&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;This graph includes 20 data points, where each point shows the average standard deviation for stocks in a given market cap range. All ranges are equally sized on a log scale (each range includes market caps between 2&lt;sup&gt;n&lt;/sup&gt; and 2&lt;sup&gt;n+1&lt;/sup&gt; for some value of n). For this graph, I included data from 1995-2015 instead of 1960-2015 to avoid concerns around market cap growth over time. A more rigorous analysis would still adjust for market cap growth, so this graph shouldn’t be considered entirely accurate, but it’s reasonably close. I didn’t try to make this adjustment because it’s not entirely obvious what the proper way to do it is.&lt;/p&gt;

      &lt;p&gt;Technically, the x-axis labels are wrong because they label ranges starting with a power of 2, e.g., the point labeled “100K” includes all stocks with a market cap between $131,072 (2&lt;sup&gt;17&lt;/sup&gt;) and $262,144 (2&lt;sup&gt;18&lt;/sup&gt;). &lt;a href=&quot;#fnref:12&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:4&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Bonds performed unusually well from 1973 to 2013 (compared to their average performance over the past two centuries), which means we can probably expect a worse return from the global market portfolio going forward. But US stocks also performed unusually well over this period. We can’t say how exactly the relationship between these two portfolios will change in the future, but we have good theoretical and empirical reasons to expect the global market portfolio to perform better on a risk-adjusted basis than the US stock market.&lt;/p&gt;

      &lt;p&gt;(The good performance of bonds is somewhat accounted for by a high risk-free rate, which we subtract out of return when calculating the Sharpe ratio.) &lt;a href=&quot;#fnref:4&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:5&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;This assumes you pay the risk-free rate as a cost of leverage. If you pay the risk-free rate plus 0.5% (which is what Interactive Brokers charges for large account holders), you’d make 18% instead of 19%. &lt;a href=&quot;#fnref:5&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:6&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;That’s how much you would have on average. In actuality, you probably would have done much better or much worse. Many public companies went bankrupt over the period. But you might have gotten lucky and picked one of the big winners. &lt;a href=&quot;#fnref:6&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:7&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Government bonds are an exception to this. Arguably, gold is an exception as well, depending on what “far more volatile” means. Gold is riskier than the global market portfolio, but probably less than twice as risky. Another possible exception is Berkshire Hathaway. But there aren’t many exceptions. &lt;a href=&quot;#fnref:7&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:9&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;The linked web page gives implied daily volatility instead of annual. Multiply it by the square root of 365 to get the implied annual standard deviation. &lt;a href=&quot;#fnref:9&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:8&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;On the other hand, I thought bitcoin was a bad investment back in 2011, so what do I know. &lt;a href=&quot;#fnref:8&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:16&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Cambridge Associates (2018). US Venture Capital Index and Selected Benchmark Statistics. &lt;a href=&quot;#fnref:16&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:17&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Hall &amp;amp; Woodward (2010). &lt;a href=&quot;https://web.stanford.edu/~rehall/Hall-Woodward%20on%20entrepreneurship.pdf&quot;&gt;The Burden of the Nondiversifiable Risk of Entrepreneurship.&lt;/a&gt; &lt;a href=&quot;#fnref:17&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:18&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Technically, you actually don’t care about the expected number of years until you have more money with option 2. What you care about is the expected number of years until you have more &lt;em&gt;utility&lt;/em&gt; of money. We can calculate this using certainty-equivalent interest rates, which I will do later in this section. &lt;a href=&quot;#fnref:18&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:10&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I derived this number as follows:&lt;/p&gt;

      &lt;ol&gt;
        &lt;li&gt;Assume an &lt;a href=&quot;https://en.wikipedia.org/wiki/Isoelastic_utility&quot;&gt;isoelastic utility function&lt;/a&gt; with a relative risk aversion coefficient of 4.&lt;/li&gt;
        &lt;li&gt;Using Merton’s formula from &lt;a href=&quot;http://lifecycleinvesting.net/Resources/merton%20lifetime%20portfolio%20selection%201969.pdf&quot;&gt;Lifetime Portfolio Selection Under Uncertainty: The Continuous-Time Case&lt;/a&gt; (1969), this gives optimal leverage of 1.6:1.&lt;/li&gt;
        &lt;li&gt;Calculate expected utility of the GMP at 1.6:1 leverage.&lt;/li&gt;
        &lt;li&gt;Invert the utility function to find the constant interest rate that provides that much utility.&lt;/li&gt;
      &lt;/ol&gt;

      &lt;p&gt;Calculations were performed using &lt;a href=&quot;https://github.com/michaeldickens/public-scripts/blob/master/leverage.py&quot;&gt;leverage.py&lt;/a&gt;.&lt;/p&gt;
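
&lt;p&gt;As a closed-form sketch of the calculation (using a lognormal/mean-variance approximation, with assumed GMP excess return and volatility chosen to roughly reproduce this essay’s figures rather than the exact historical inputs):&lt;/p&gt;

```python
def merton_leverage(excess_return, volatility, risk_aversion):
    """Merton's optimal risky-asset weight: (mu - r) / (gamma * sigma^2)."""
    return excess_return / (risk_aversion * volatility ** 2)

def certainty_equivalent_excess(lev, excess_return, volatility, risk_aversion):
    """Certainty-equivalent excess return of a levered position under
    isoelastic utility (lognormal/mean-variance approximation):
    L * (mu - r) - gamma * (L * sigma)^2 / 2.
    """
    return lev * excess_return - risk_aversion * (lev * volatility) ** 2 / 2

# Assumed (illustrative) historical GMP parameters:
mu_excess, sigma, gamma = 0.047, 0.086, 4
lev = merton_leverage(mu_excess, sigma, gamma)                # ~1.6:1 leverage
ce = certainty_equivalent_excess(lev, mu_excess, sigma, gamma)  # ~3.7% excess
```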

      &lt;p&gt;As a side note, it seems unlikely to me that altruistic investors would want to use a relative risk aversion coefficient of 4. Another way to justify 1.6:1 leverage is to make more pessimistic assumptions about future returns. &lt;a href=&quot;#fnref:10&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:19&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;The longer you wait, the greater volatility your holdings might experience. There’s a theoretical model that defines how to trade off transaction costs vs. volatility risk: see Almgren &amp;amp; Chriss (2000), &lt;a href=&quot;https://www.smallake.kr/wp-content/uploads/2016/03/optliq.pdf&quot;&gt;Optimal Execution of Portfolio Transactions.&lt;/a&gt; &lt;a href=&quot;#fnref:19&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>"Disappointing Futures" Might Be As Important As Existential Risks</title>
				<pubDate>Wed, 02 Sep 2020 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2020/09/02/disappointing-futures/</link>
				<guid isPermaLink="true">http://mdickens.me/2020/09/02/disappointing-futures/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;&lt;em&gt;&lt;a href=&quot;/confidence_tags&quot;&gt;Confidence&lt;/a&gt;: Possible.&lt;/em&gt;&lt;/p&gt;

&lt;h2 id=&quot;summary&quot;&gt;Summary&lt;/h2&gt;

&lt;ul&gt;
  &lt;li&gt;Perhaps the most concerning risk to civilization is that we continue to exist for millennia and nothing particularly bad happens, but that we never come close to achieving our potential—that is, we end up in a “disappointing future.” &lt;a href=&quot;#introduction&quot;&gt;[More]&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;A disappointing future might occur if, for example: we never leave the solar system; wild animal suffering continues; or we never saturate the universe with maximally flourishing beings. &lt;a href=&quot;#examples-of-disappointing-futures&quot;&gt;[More]&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;In comparison to civilization’s potential, a disappointing future would be nearly as bad as an existential catastrophe (and possibly worse).&lt;/li&gt;
  &lt;li&gt;We can make several plausible arguments for why disappointing futures might occur.  &lt;a href=&quot;#why-we-should-be-concerned-about-disappointing-futures&quot;&gt;[More]&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;According to a survey of quantitative predictions, disappointing futures appear roughly as likely as existential catastrophes. &lt;a href=&quot;#comparing-disappointing-futures-to-x-risks&quot;&gt;[More]&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;Preventing disappointing futures seems less tractable than reducing existential risk, but there are some things we might be able to do. &lt;a href=&quot;#what-might-reduce-the-risk-of-disappointing-futures&quot;&gt;[More]&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;!-- more --&gt;

&lt;p&gt;&lt;em&gt;Cross-posted to the &lt;a href=&quot;https://forum.effectivealtruism.org/posts/9AYmbh25eKLojeQGe/disappointing-futures-might-be-as-important-as-existential&quot;&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;h2 id=&quot;contents&quot;&gt;Contents&lt;/h2&gt;

&lt;ul id=&quot;markdown-toc&quot;&gt;
  &lt;li&gt;&lt;a href=&quot;#summary&quot; id=&quot;markdown-toc-summary&quot;&gt;Summary&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#contents&quot; id=&quot;markdown-toc-contents&quot;&gt;Contents&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#introduction&quot; id=&quot;markdown-toc-introduction&quot;&gt;Introduction&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#examples-of-disappointing-futures&quot; id=&quot;markdown-toc-examples-of-disappointing-futures&quot;&gt;Examples of disappointing futures&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#we-never-leave-the-solar-system&quot; id=&quot;markdown-toc-we-never-leave-the-solar-system&quot;&gt;We never leave the solar system&lt;/a&gt;        &lt;ul&gt;
          &lt;li&gt;&lt;a href=&quot;#required-philosophical-premises&quot; id=&quot;markdown-toc-required-philosophical-premises&quot;&gt;Required philosophical premises&lt;/a&gt;&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#massive-nonhuman-animal-suffering-continues&quot; id=&quot;markdown-toc-massive-nonhuman-animal-suffering-continues&quot;&gt;Massive nonhuman animal suffering continues&lt;/a&gt;        &lt;ul&gt;
          &lt;li&gt;&lt;a href=&quot;#required-philosophical-premises-1&quot; id=&quot;markdown-toc-required-philosophical-premises-1&quot;&gt;Required philosophical premises&lt;/a&gt;&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#we-never-saturate-the-universe-with-maximally-flourishing-beings&quot; id=&quot;markdown-toc-we-never-saturate-the-universe-with-maximally-flourishing-beings&quot;&gt;We never saturate the universe with maximally flourishing beings&lt;/a&gt;        &lt;ul&gt;
          &lt;li&gt;&lt;a href=&quot;#required-philosophical-premises-2&quot; id=&quot;markdown-toc-required-philosophical-premises-2&quot;&gt;Required philosophical premises&lt;/a&gt;&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#why-we-should-be-concerned-about-disappointing-futures&quot; id=&quot;markdown-toc-why-we-should-be-concerned-about-disappointing-futures&quot;&gt;Why we should be concerned about disappointing futures&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#argument-from-the-difficulty-of-mutually-beneficial-trades&quot; id=&quot;markdown-toc-argument-from-the-difficulty-of-mutually-beneficial-trades&quot;&gt;Argument from the difficulty of mutually beneficial trades&lt;/a&gt;        &lt;ul&gt;
          &lt;li&gt;&lt;a href=&quot;#the-expanding-circle&quot; id=&quot;markdown-toc-the-expanding-circle&quot;&gt;The expanding circle&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#why-might-civilization-eventually-benefit-non-participants&quot; id=&quot;markdown-toc-why-might-civilization-eventually-benefit-non-participants&quot;&gt;Why might civilization eventually benefit non-participants?&lt;/a&gt;&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#argument-from-the-rarity-of-values-realization&quot; id=&quot;markdown-toc-argument-from-the-rarity-of-values-realization&quot;&gt;Argument from the rarity of values realization&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#argument-from-the-impossibility-of-reflective-equilibrium&quot; id=&quot;markdown-toc-argument-from-the-impossibility-of-reflective-equilibrium&quot;&gt;Argument from the impossibility of reflective equilibrium&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#argument-from-technological-hurdles&quot; id=&quot;markdown-toc-argument-from-technological-hurdles&quot;&gt;Argument from technological hurdles&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#argument-from-natural-selection&quot; id=&quot;markdown-toc-argument-from-natural-selection&quot;&gt;Argument from natural selection&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#argument-from-status-quo&quot; id=&quot;markdown-toc-argument-from-status-quo&quot;&gt;Argument from status quo&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#comparing-disappointing-futures-to-x-risks&quot; id=&quot;markdown-toc-comparing-disappointing-futures-to-x-risks&quot;&gt;Comparing disappointing futures to x-risks&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#probability-of-an-existential-catastrophe&quot; id=&quot;markdown-toc-probability-of-an-existential-catastrophe&quot;&gt;Probability of an existential catastrophe&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#probability-of-a-disappointing-future&quot; id=&quot;markdown-toc-probability-of-a-disappointing-future&quot;&gt;Probability of a disappointing future&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#what-might-reduce-the-risk-of-disappointing-futures&quot; id=&quot;markdown-toc-what-might-reduce-the-risk-of-disappointing-futures&quot;&gt;What might reduce the risk of disappointing futures?&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#tractability&quot; id=&quot;markdown-toc-tractability&quot;&gt;Tractability&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#my-subjective-probability-estimates&quot; id=&quot;markdown-toc-my-subjective-probability-estimates&quot;&gt;My subjective probability estimates&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#acknowledgments&quot; id=&quot;markdown-toc-acknowledgments&quot;&gt;Acknowledgments&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#notes&quot; id=&quot;markdown-toc-notes&quot;&gt;Notes&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h1 id=&quot;introduction&quot;&gt;Introduction&lt;/h1&gt;

&lt;p&gt;As defined by Toby Ord in &lt;a href=&quot;https://theprecipice.com/&quot;&gt;The Precipice&lt;/a&gt;, “An existential catastrophe is the destruction of humanity’s longterm potential.” Relatedly, a &lt;strong&gt;disappointing future&lt;/strong&gt; is when humans do not go extinct and civilization does not collapse or fall into a dystopia, but civilization&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt; nonetheless never realizes its potential.&lt;/p&gt;

&lt;p&gt;The most salient (although perhaps not the most probable) example of a disappointing future: civilization continues to exist in essentially the same form that it has for the past few hundred years or so. If we extrapolate from civilization’s current trajectory, we might expect the long-run future to have these features:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;The human population size stabilizes at around 10 billion.&lt;/li&gt;
  &lt;li&gt;Global poverty ceases to exist, and all humans become wealthy by today’s standards.&lt;/li&gt;
  &lt;li&gt;Scientific and societal advances make people somewhat happier, but not transformatively so.&lt;/li&gt;
  &lt;li&gt;Humans continue not to care about wild animals’ welfare. Wild animal suffering continues to &lt;a href=&quot;https://longtermrisk.org/the-importance-of-wild-animal-suffering/#Summary_of_why_suffering_probably_dominates_happiness&quot;&gt;massively dominate human happiness&lt;/a&gt;, such that sentient life as a whole experiences more suffering than happiness.&lt;/li&gt;
  &lt;li&gt;We never populate other planets.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Call this the “naively extrapolated future”. We could certainly extrapolate other plausible futures from civilization’s current trajectory—for example, if we extrapolate the &lt;a href=&quot;https://www.utilitarian.net/singer/by/199704--.htm&quot;&gt;expanding circle&lt;/a&gt; of moral concern, we might predict that future humans will care much more about animals’ welfare. And if humans become sufficiently technologically powerful, we might decide to end wild animal suffering. I don’t actually believe the naively extrapolated future is the most plausible outcome—more on that later—but I do think if you asked most people what they expect the world to look like a thousand years from now, they’d predict something like it.&lt;/p&gt;

&lt;p&gt;A note on definitions: Under the definitions of “existential catastrophe” and “disappointing future”, it’s debatable whether a disappointing future counts as a type of existential catastrophe. But when people talk about interventions to reduce existential risk, they almost always focus on near-term extinction events or global catastrophes. If civilization putters around on earth for a few billion years and then goes extinct when the sun expands, that would qualify as a disappointing future. It might &lt;em&gt;technically&lt;/em&gt; count as an existential catastrophe, but it’s not what people usually mean by the term. If we want to reduce the risk of a disappointing future, we might want to focus on things other than typical x-risk interventions. Therefore, it’s useful to treat disappointing futures as distinct from existential catastrophes.&lt;/p&gt;

&lt;p&gt;In this essay, I argue that disappointing futures appear comparably important to existential catastrophes. I do &lt;em&gt;not&lt;/em&gt; attempt to argue that effective altruists should prioritize disappointing futures. In terms of the scale/neglectedness/tractability framework, this essay focuses primarily on scale (although I will touch on tractability).&lt;/p&gt;

&lt;p&gt;The basic argument:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;The importance of a particular future equals its goodness or badness, weighted by its probability of occurring.&lt;sup id=&quot;fnref:23&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:23&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt;&lt;/li&gt;
  &lt;li&gt;We might prefer a disappointing future over one where an existential catastrophe occurs, but both massively under-shoot civilization’s potential. Disappointing futures and existential catastrophes look comparably bad in comparison to the best possible outcomes.&lt;/li&gt;
  &lt;li&gt;Disappointing futures are approximately as probable as existential catastrophes (to within an order of magnitude).&lt;/li&gt;
  &lt;li&gt;Therefore, disappointing futures are about as important as existential catastrophes.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The rest of this essay is structured as follows:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;In &lt;a href=&quot;#examples-of-disappointing-futures&quot;&gt;“Examples of disappointing futures”&lt;/a&gt;, I describe some possible future outcomes, and why they qualify as disappointing.&lt;/li&gt;
  &lt;li&gt;In &lt;a href=&quot;#why-we-should-be-concerned-about-disappointing-futures&quot;&gt;“Why we should be concerned about disappointing futures”&lt;/a&gt;, I provide some qualitative arguments for why we might end up in a disappointing future.&lt;/li&gt;
  &lt;li&gt;In &lt;a href=&quot;#comparing-disappointing-futures-to-x-risks&quot;&gt;“Comparing disappointing futures to x-risks”&lt;/a&gt;, I compare the probability of a disappointing future to the probability of existential catastrophe, as well as their relative magnitudes.&lt;/li&gt;
  &lt;li&gt;In &lt;a href=&quot;#what-might-reduce-the-risk-of-disappointing-futures&quot;&gt;“What might reduce the risk of disappointing futures?”&lt;/a&gt;, I briefly examine what we might be able to do to avoid disappointing futures, and how tractable this seems compared to reducing x-risks.&lt;/li&gt;
  &lt;li&gt;I conclude with some of my &lt;a href=&quot;#my-subjective-probability-estimates&quot;&gt;subjective probability estimates&lt;/a&gt; on relevant predictions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The first few parts of this essay can be read somewhat independently. Readers who feel they already understand the concept of disappointing futures can skip past &lt;a href=&quot;#examples-of-disappointing-futures&quot;&gt;“Examples of disappointing futures”&lt;/a&gt;, and start reading at &lt;a href=&quot;#why-we-should-be-concerned-about-disappointing-futures&quot;&gt;“Why we should be concerned about disappointing futures”&lt;/a&gt;. Readers who agree that disappointing futures matter, or who just want to see quantitative predictions, can skip forward to &lt;a href=&quot;#comparing-disappointing-futures-to-x-risks&quot;&gt;“Comparing disappointing futures to x-risks”&lt;/a&gt;.&lt;/p&gt;

&lt;h1 id=&quot;examples-of-disappointing-futures&quot;&gt;Examples of disappointing futures&lt;/h1&gt;

&lt;p&gt;My description of the naively extrapolated future paints a fairly specific picture. In this section, I will describe some disappointing futures that don’t require making as many predictions about how the future will look.&lt;/p&gt;

&lt;p&gt;For each of these possible futures, the claim that it would be bad depends on adopting at least one controversial philosophical premise. I will not attempt to fully justify these premises—entire dissertations could be (and indeed have been) written about them. But you only need to accept the premises of one of these possible futures, or of some other future I didn’t list, for my argument to work. Under each example, I will briefly describe the required premises.&lt;/p&gt;

&lt;p&gt;This is not meant to be a comprehensive list; it just includes some of the most easily-imaginable disappointing futures.&lt;/p&gt;

&lt;h2 id=&quot;we-never-leave-the-solar-system&quot;&gt;We never leave the solar system&lt;/h2&gt;

&lt;p&gt;If we spread throughout the galaxy, we could create many orders of magnitude more happy lives—perhaps on the order of a trillion times more than exist today. If we fail to do this, we reduce the value of the future by a factor of a trillion.&lt;/p&gt;

&lt;p&gt;For the sake of illustration, suppose humanity has a 1 in 3 chance of extinction, a 1 in 3 chance of surviving but staying on earth, and a 1 in 3 chance of populating the galaxy. If we have the opportunity to snap our fingers and reduce the chance of one of the two bad outcomes to zero, then we prefer to eliminate the “stay on earth” outcome. And not just slightly prefer it—preventing this outcome is a trillion times better than preventing extinction. If we have the choice between reducing extinction risk to 0 with certainty, or reducing the chance of staying on earth by only 1 in a billion (from 33.3333333% to 33.3333332%), then we should still strongly prefer the latter.&lt;sup id=&quot;fnref:18&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:18&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;
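The arithmetic behind this comparison can be made concrete with a minimal sketch. The numbers here are illustrative assumptions, not taken from the essay: extinction is worth 0, an earth-bound future 1 unit, a galactic future 10^12 units, and preventing an outcome is modeled as shifting its probability to the next-best outcome.

```python
# Illustrative expected-value comparison for the three-outcome example.
# Hypothetical values (assumptions, not the essay's own model): extinction = 0,
# surviving but staying on earth = 1, populating the galaxy = 1e12.
V_EXTINCT, V_EARTH, V_GALAXY = 0.0, 1.0, 1e12

def expected_value(p_extinct, p_earth, p_galaxy):
    """Probability-weighted value of the future across the three outcomes."""
    return p_extinct * V_EXTINCT + p_earth * V_EARTH + p_galaxy * V_GALAXY

baseline = expected_value(1/3, 1/3, 1/3)

# Option A: eliminate extinction risk entirely, with that probability mass
# shifting to the next-best outcome (surviving but staying on earth).
gain_a = expected_value(0, 2/3, 1/3) - baseline

# Option B: shift just 1-in-a-billion of probability from "stay on earth"
# to "populate the galaxy".
gain_b = expected_value(1/3, 1/3 - 1e-9, 1/3 + 1e-9) - baseline

print(gain_a)  # about 1/3
print(gain_b)  # about 1,000
```

Under these assumptions, eliminating extinction risk gains roughly 1/3 of an earth-bound civilization's value, while the one-in-a-billion shift toward the galactic outcome gains roughly 1,000 units, about 3,000 times more, matching the comparison in the paragraph above.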

&lt;p&gt;Why might humans never spread throughout the universe?&lt;/p&gt;

&lt;p&gt;We have already spread across almost all (land) regions of the earth. Given sufficient time, humans tend to populate habitable areas. However:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Humans spreading across the planet coincided with increasing population, but human population &lt;a href=&quot;https://en.wikipedia.org/wiki/Demographic_transition&quot;&gt;appears to be stabilizing&lt;/a&gt;, which reduces the value of new land.&lt;/li&gt;
  &lt;li&gt;Interstellar travel involves significant hurdles that we have never faced before. Other planets are uninhabitable, and we would have to invest massive efforts into terraforming them. Before colonizing other planets, one would expect humans to colonize Antarctica or the bottom of the ocean (or &lt;a href=&quot;https://slatestarcodex.com/2014/07/21/promising-the-moon/&quot;&gt;Nebraska&lt;/a&gt; for that matter), and people have not shown much interest in those.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Tyler Cowen &lt;a href=&quot;https://80000hours.org/podcast/episodes/tyler-cowen-stubborn-attachments/&quot;&gt;spoke on the 80,000 Hours Podcast&lt;/a&gt; on why he expects humanity not to populate other planets:&lt;sup id=&quot;fnref:13&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:13&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;I think space is overrated.&lt;/p&gt;

  &lt;p&gt;It’s far, there are severe physical strains you’re subject to while you’re being transported, communication back and forth takes a very long time under plausible scenarios limited by the speed of light. And what’s really out there? Maybe there are exoplanets, but when you have to construct atmosphere, there’s a risk diversification argument for doing it.&lt;/p&gt;

  &lt;p&gt;But simply being under the ocean or high up in the sky or distant corners of the earth, we’re not about to run out of space or anything close to it. So I don’t really see what’s the economic reason to have something completely external, say, to the solar system.&lt;/p&gt;

  &lt;p&gt;The idea that somehow we’re going to be sitting here three million years from now, and I’ll have my galaxy and you’ll have yours, and we’re not even human anymore. It’s not even recognizable as something from science fiction. I would bet against that if we could arrange bets on it.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I would take the other side of that bet, conditional on civilization surviving long enough. But Cowen’s prediction is not unreasonable.&lt;/p&gt;

&lt;h3 id=&quot;required-philosophical-premises&quot;&gt;Required philosophical premises&lt;/h3&gt;

&lt;p&gt;The badness of this scenario depends on something like the total view of population ethics with linear aggregation:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;It is good to create happy people, and bad not to.&lt;/li&gt;
  &lt;li&gt;All else equal, each additional happy person matters as much as the last one.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Other views might claim that something can only be good if it is good for someone (the &lt;a href=&quot;https://en.wikipedia.org/wiki/Person-affecting_view&quot;&gt;person-affecting view&lt;/a&gt;) or that additional people have diminishing marginal value. (For a review of the most important views in population ethics, see Greaves (2017), &lt;a href=&quot;http://users.ox.ac.uk/~mert2255/papers/population_axiology.pdf&quot;&gt;Population Axiology&lt;/a&gt;.)&lt;/p&gt;

&lt;p&gt;Under these other views, it is okay not to create astronomical numbers of people in the far future, so it does not much matter if civilization never leaves earth.&lt;/p&gt;

&lt;p&gt;However, to believe that existential risk matters overwhelmingly more than short-term causes, you need to adopt (something like) the total view with linear aggregation. Under non-total views, an existential catastrophe is still bad. But most justifications for prioritizing existential risk reduction focus on the overwhelming importance of the far future, which entails the total view with linear aggregation.&lt;sup id=&quot;fnref:12&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:12&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;5&lt;/a&gt;&lt;/sup&gt; For more, see Beckstead (2013), &lt;a href=&quot;https://docs.google.com/viewer?a=v&amp;amp;pid=sites&amp;amp;srcid=ZGVmYXVsdGRvbWFpbnxuYmVja3N0ZWFkfGd4OjExNDBjZTcwNjMxMzRmZGE&quot;&gt;On the Overwhelming Importance of Shaping the Far Future&lt;/a&gt;, sections 4 and 5.&lt;/p&gt;

&lt;p&gt;If we reject either of these premises, we must also reject the overwhelming importance of shaping the far future.&lt;/p&gt;

&lt;h2 id=&quot;massive-nonhuman-animal-suffering-continues&quot;&gt;Massive nonhuman animal suffering continues&lt;/h2&gt;

&lt;p&gt;Even if humans colonize the galaxy, this could be bad if we bring wild animals along with us. &lt;a href=&quot;https://reducing-suffering.org/how-many-wild-animals-are-there/&quot;&gt;There are many more wild animals than humans&lt;/a&gt;, and especially so for small animals like insects and nematodes. Depending on assumptions about how consciousness works and on how happiness and suffering trade off, it seems likely that wild animal suffering outweighs human happiness. A galaxy filled with humans and wild animals might not merely be disappointing—it could be worse than extinction.&lt;/p&gt;

&lt;p&gt;Most people value nature, and many see a moral duty to preserve it. We might expect future humans to bring nature with them to other planets. But over the last century, human populations have increased much more rapidly than other animals’, with the exception of farmed animals; if this trend continues, the balance of welfare might tip toward positive. Even so, a future universe filled with animal suffering would contain much less net welfare than it should, thus qualifying as a disappointing future.&lt;sup id=&quot;fnref:24&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:24&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;6&lt;/a&gt;&lt;/sup&gt; (Although a net positive future with small wild animal populations still seems less disappointing than a future where we never leave earth at all.)&lt;/p&gt;

&lt;h3 id=&quot;required-philosophical-premises-1&quot;&gt;Required philosophical premises&lt;/h3&gt;

&lt;p&gt;This scenario is only bad if you assign non-trivial moral value to non-human animals. For more on this, see Schukraft (2020), &lt;a href=&quot;https://forum.effectivealtruism.org/posts/XG86pCgqTweFsQyrd/how-to-measure-capacity-for-welfare-and-moral-status&quot;&gt;How to Measure Capacity for Welfare and Moral Status&lt;/a&gt;. The degree of badness of this scenario depends on how bad you believe wild animals’ lives are.&lt;/p&gt;

&lt;p&gt;Its badness does not depend on any controversial assumptions about population ethics, because while not all widely-held views agree that it is good to create happy beings, they do agree that it is bad to create beings who live miserable lives. In fact, under non-total views of population ethics, wild animal suffering could matter more than under the total view, because creating unhappy wild animals cannot be outweighed by creating happy humans.&lt;/p&gt;

&lt;h2 id=&quot;we-never-saturate-the-universe-with-maximally-flourishing-beings&quot;&gt;We never saturate the universe with maximally flourishing beings&lt;/h2&gt;

&lt;p&gt;If we accept something like the total view of population ethics with linear aggregation, it follows that we should enrich the universe with as much goodness as possible. That means creating maximum pleasure, or eudaimonia, or whatever it is we consider valuable.&lt;/p&gt;

&lt;p&gt;The current structure of the world uses almost all available energy on things that aren’t valuable in themselves, like heating and transportation. We could use that energy to make more happy people instead. And the beings capable of the greatest flourishing probably don’t look much like humans. By retaining physical human bodies, we would lose out on most of the possible good we could create. (David Pearce writes of the &lt;a href=&quot;https://www.paradise-engineering.com/heaven.htm&quot;&gt;hedonistic imperative&lt;/a&gt; to create beings capable of much greater heights of flourishing than we can imagine.) And if we use our resources efficiently, we could create astronomically more flourishing beings than exist today.&lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;7&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;This vision does not depend on specific assumptions about what “flourishing” looks like. It could fit the hedonistic utilitarian idea of hedonium—matter that’s organized to efficiently produce simple but maximally happy beings, who have no function other than to experience happiness—but it could also look like something else. For almost any conception of what constitutes the good, surely we can find better ways of expressing it than with human bodies living in large, mostly-empty spaces.&lt;/p&gt;

&lt;p&gt;But humans tend to like their bodies. Many people want to see the human species continue. Some people have proposed that we leave earth relatively untouched, while filling the rest of the universe with hedonium, achieving a good compromise across multiple value systems. If we don’t do this, and simply fill the universe with a human (or post-human) civilization that uses energy about as inefficiently as we use it today, this would represent a major missed opportunity.&lt;/p&gt;

&lt;h3 id=&quot;required-philosophical-premises-2&quot;&gt;Required philosophical premises&lt;/h3&gt;

&lt;p&gt;Like “We never leave earth”, the badness of this scenario presupposes the total view of population ethics.&lt;/p&gt;

&lt;p&gt;This scenario does not depend on any particular assumption about what constitutes happiness/flourishing. The only necessary assumption is that the best possible future is far better than a reasonably good one, even one where humans populate the accessible universe.&lt;/p&gt;

&lt;h1 id=&quot;why-we-should-be-concerned-about-disappointing-futures&quot;&gt;Why we should be concerned about disappointing futures&lt;/h1&gt;

&lt;p&gt;In the past, when I’ve talked to friends about the possibility of disappointing futures, especially friends who prioritize existential risk, they’ve tended to dismiss it as too unlikely. For example, they argue that wild animal suffering will not continue because technology will advance to the point that we have the power to end wild animal suffering. And even if most people don’t care about it, all we need is that a few people care enough to make it happen. Alternatively, I have heard people argue that civilization will stop consisting of biological beings (perhaps because humans upload their brains) and wild animals will cease to exist.&lt;/p&gt;

&lt;p&gt;I believe some people are too confident in these arguments.&lt;/p&gt;

&lt;p&gt;I find them plausible, at least. But I can also see plausible arguments that we shouldn’t worry about existential risks. The case for prioritizing x-risk reduction does not depend on a strong conviction that an existential catastrophe will occur otherwise. It only requires that an existential catastrophe appears to be a reasonable possibility (and might not even require that, if you’re willing to take bets with &lt;a href=&quot;https://nickbostrom.com/papers/pascal.pdf&quot;&gt;high apparent EV but tiny probabilities of success&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;Similarly, the case for caring about disappointing futures does not require that they appear likely, only that they appear to be a reasonable possibility.&lt;/p&gt;

&lt;p&gt;My best guess is that marginal work on x-risk reduction does more expected good than marginal work on disappointing futures. (More on this &lt;a href=&quot;#comparing-disappointing-futures-to-x-risks&quot;&gt;later&lt;/a&gt;.) But I do believe disappointing futures matter on roughly the same scale as x-risks (to within an order of magnitude), and have received almost no discussion. So they represent an under-investigated opportunity.&lt;/p&gt;

&lt;p&gt;Below, I present a series of arguments for why we might expect the future to be disappointing. I find all of them plausible but not strongly compelling. I can also come up with many arguments to the contrary, on why disappointing futures might not occur. My intention is to show that we cannot easily dismiss the possibility, so I will only present arguments in favor.&lt;/p&gt;

&lt;h2 id=&quot;argument-from-the-difficulty-of-mutually-beneficial-trades&quot;&gt;Argument from the difficulty of mutually beneficial trades&lt;/h2&gt;

&lt;p&gt;People’s lives have generally gotten better over time, so it seems natural to predict that eventually we will solve all major problems and ensure everyone lives a fulfilling life. This prediction, while &lt;em&gt;plausibly&lt;/em&gt; true, is certainly not &lt;em&gt;obviously&lt;/em&gt; true.&lt;/p&gt;

&lt;p&gt;So far, civilization has improved the lives of most participants because it has allowed people to make mutually beneficial trades. But it has utterly failed to improve the lives of non-participants, most notably non-existent potential humans and non-human animals. In the case of animals, civilization has in fact made their lives much worse, to an extent that perhaps outweighs all the good civilization has done for humans (at least so far).&lt;sup id=&quot;fnref:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;8&lt;/a&gt;&lt;/sup&gt;&lt;sup id=&quot;fnref:26&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:26&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;9&lt;/a&gt;&lt;/sup&gt; And while humans’ lives have tended to get better across generations, this has happened more or less accidentally due to economic and technological development—past generations did not focus on growing the economy so they could provide value to future generations. And this has only benefited existing people, not nonexistent potential people. While many people want to have children, almost nobody intrinsically cares about bringing new people into existence irrespective of kinship.&lt;/p&gt;

&lt;p&gt;Life has gotten better (for humans) through two mechanisms:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;The economy allows people to gain from helping others (e.g., by developing new technology and selling it). Increasing globalization makes this process easier.&lt;/li&gt;
  &lt;li&gt;Most people have a strong evolutionary desire to reproduce, so civilization continues across generations.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;At a bird’s-eye view, people in society act in their own self-interest (hence the common economic concept of &lt;a href=&quot;https://en.wikipedia.org/wiki/Homo_economicus&quot;&gt;homo economicus&lt;/a&gt;). In order to avoid one of the disappointing futures described above, this must fundamentally change. To end wild animal suffering, or to create increasingly many happy people, or to create non-human sentient beings, actors must do things that provide no personal benefit. People do behave selflessly all the time, but (1) not frequently enough to be visible at the civilizational level, and (2) people almost exclusively behave altruistically toward those with whom they have a personal connection. (For example, charitable donations clearly do help people; but improvements in standard of living over the past several hundred years have come almost entirely from people behaving self-interestedly in ways that had positive externalities, not from charity.)&lt;/p&gt;

&lt;p&gt;In theory, we might achieve much larger human populations simply due to people’s evolutionary motivations to spread their genes as widely as possible. But empirically, this &lt;a href=&quot;https://en.wikipedia.org/wiki/Demographic_transition&quot;&gt;doesn’t seem to happen&lt;/a&gt;—people in wealthier societies tend to have fewer children.&lt;/p&gt;

&lt;p&gt;If civilization will someday spend substantial resources on helping non-participants such as wild animals and non-existent future people, this seems to require a fundamental change in how people behave. Such a change could occur, but a hypothesis that posits a major deviation in human behavior should be viewed with suspicion.&lt;/p&gt;

&lt;h3 id=&quot;the-expanding-circle&quot;&gt;The expanding circle&lt;/h3&gt;

&lt;p&gt;As &lt;a href=&quot;https://www.utilitarian.net/singer/by/199704--.htm&quot;&gt;Peter Singer&lt;/a&gt; and others have observed, over time, civilization has tended to widen its circle of moral concern. If we extrapolate the expanding circle, we might expect society’s concern to extend to all morally relevant beings. If so, society will probably aim to create the best possible world for all beings, averting a disappointing future. But we have &lt;a href=&quot;https://www.gwern.net/The-Narrowing-Circle&quot;&gt;some reason to doubt&lt;/a&gt; that this trend exists.&lt;/p&gt;

&lt;p&gt;What is the probability that civilization will eventually come to incorporate the interests of all morally relevant beings? This is a complex sociological question that I am not qualified to answer. My weakly-held impression is that historical instances of moral circle expansion generally only occurred when the people in power could make beneficial trades with the previously-disenfranchised individuals or with those individuals’ kin. In any case, I do not believe we can confidently claim that future society will properly recognize the interests of all morally relevant beings.&lt;/p&gt;

&lt;p&gt;More writings on this topic:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;&lt;a href=&quot;https://forum.effectivealtruism.org/posts/WF5GDjLQgLMjaXW6B/the-narrowing-circle-gwern&quot;&gt;Aaron Gertler, “The Narrowing Circle (Gwern)”&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://forum.effectivealtruism.org/posts/EXi2Nb9RQWjztgJSm/the-moral-circle-is-not-a-circle&quot;&gt;Grue_Slinky, “The Moral Circle is not a Circle”&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://forum.effectivealtruism.org/posts/M2gBGYWEQDnrPt6nb/moral-circles-degrees-dimensions-visuals&quot;&gt;Michael Aird, “Moral circles: Degrees, dimensions, visuals”&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3 id=&quot;why-might-civilization-eventually-benefit-non-participants&quot;&gt;Why might civilization eventually benefit non-participants?&lt;/h3&gt;

&lt;p&gt;I have heard a few specific predictions on why we shouldn’t worry about moral circle expansion:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;We will develop an AI that runs something like &lt;a href=&quot;https://wiki.lesswrong.com/wiki/Coherent_Extrapolated_Volition&quot;&gt;Coherent Extrapolated Volition&lt;/a&gt;, and it will figure out that we should care about wild animals, future humans, and any other beings that might matter.&lt;/li&gt;
  &lt;li&gt;A small number of altruists will single-handedly (few-handedly?) do what’s necessary to help disenfranchised groups.&lt;/li&gt;
  &lt;li&gt;There won’t be any biological beings in the far future, so there will be no more wild animal suffering.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;(The third prediction only matters for wild animal suffering, not for other causes of disappointing futures.)&lt;/p&gt;

&lt;p&gt;I won’t address any of these in detail. But all of these predictions depend on fairly specific prerequisites. For example, #2 requires that these altruists have sufficient resources to substantially impact the world, and that their goals will not conflict with anyone else’s. If altruists decide to, say, eliminate predation, then they might face resistance from people who aesthetically value nature. If altruists want to build Dyson spheres to power tiny maximally-happy brains, that requires taking up lots of resources that other actors might want.&lt;sup id=&quot;fnref:19&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:19&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;10&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;Be wary of predictions that depend on specific prerequisites. Do not dismiss them entirely, but weigh how likely each prerequisite is to hold.&lt;/p&gt;

&lt;h2 id=&quot;argument-from-the-rarity-of-values-realization&quot;&gt;Argument from the rarity of values realization&lt;/h2&gt;

&lt;p&gt;Even when people hold certain values, they typically fail to live up to those values except when it’s expedient. Some examples:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;In Peter Singer’s &lt;a href=&quot;https://www.utilitarian.net/singer/by/199704--.htm&quot;&gt;“drowning child” thought experiment&lt;/a&gt;, almost everyone agrees that we have a moral obligation to save the child, and that physical distance is not morally relevant. But almost nobody donates meaningful amounts of money to global poverty alleviation, even though doing so would not require any personal hardship for a large percentage of the population.&lt;/li&gt;
  &lt;li&gt;A 2015 Gallup poll found that &lt;a href=&quot;https://news.gallup.com/poll/183275/say-animals-rights-people.aspx&quot;&gt;32% believe animals should have the same rights as people&lt;/a&gt;, even though only &lt;a href=&quot;https://news.gallup.com/poll/238328/snapshot-few-americans-vegetarian-vegan.aspx&quot;&gt;5% are vegetarian&lt;/a&gt;.&lt;sup id=&quot;fnref:16&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:16&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;11&lt;/a&gt;&lt;/sup&gt; (Surveys like this often suffer from &lt;a href=&quot;https://en.wikipedia.org/wiki/Social_desirability_bias&quot;&gt;social desirability bias&lt;/a&gt; and other issues. Nonetheless, 32% vs. 5% seems like such a large difference that it probably overwhelms any methodological flaws.)&lt;/li&gt;
  &lt;li&gt;Famously, several of America’s founding fathers (most notably Thomas Jefferson) opposed slavery even though they owned slaves. Slavery did not end in the developed world until industrialization rendered the practice largely unprofitable.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In &lt;a href=&quot;https://www.utilitarianism.net/books/utilitarianism-mill/1&quot;&gt;&lt;em&gt;Utilitarianism&lt;/em&gt;&lt;/a&gt; (1863), John Stuart Mill made a similar observation:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;All social inequalities which have ceased to be considered expedient, assume the character not of simple inexpediency, but of injustice…The entire history of social improvement has been a series of transitions, by which one custom or institution after another, from being a supposed primary necessity of social existence, has passed into the rank of universally stigmatised injustice and tyranny. So it has been with the distinctions of slaves and freemen, nobles and serfs, patricians and plebeians; and so it will be, and in part already is, with the aristocracies of colour, race, and sex.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Mill paints an optimistic picture of a historical trend of improvements in social equality. But this trend also suggests some reason for pessimism: if we expect a particular injustice to remain expedient, then it seems likely to continue indefinitely.&lt;/p&gt;

&lt;p&gt;It seems highly likely that, given continuing technological and economic progress, global poverty will continue to shrink, and society will eventually replace factory farming with more humane methods of producing food. But what about the inequality between existent and non-existent beings? It seems likely that bringing happy beings into existence (outside of one’s own offspring or relatives) will never become expedient.&lt;sup id=&quot;fnref:17&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:17&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;12&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;h2 id=&quot;argument-from-the-impossibility-of-reflective-equilibrium&quot;&gt;Argument from the impossibility of reflective equilibrium&lt;/h2&gt;

&lt;p&gt;&lt;a href=&quot;https://longtermrisk.org/dealing-with-moral-multiplicity/#What_about_reflective_equilibrium&quot;&gt;Brian Tomasik writes&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;Reflective-equilibrium views aim to resolve ethical disagreements into a self-consistent whole that society can accept. One example is Eliezer Yudkowsky’s coherent extrapolated volition (CEV), which would involve finding convergent points of agreement among humans who know more, are more connected, are more the people they wish they were, and so on.&lt;/p&gt;

  &lt;p&gt;A main problem with this approach is that the outcome is not unique. It’s sensitive to initial inputs, perhaps heavily so. How do we decide what forms of “extrapolation” are legitimate? Every new experience changes your brain in some way or other. Which ones are allowed? Presumably reading moral-philosophy arguments is sanctioned, but giving yourself brain damage is not? How about taking psychedelic mushrooms? How about having your emotions swayed by poignant music or sweeping sermons? What if the order of presentation matters? After all, there might be primacy and recency effects. Would we try all possible permutations of the material so that your neural connections could form in different ways? What happens when – as seems to me almost inevitable – the results come out differently at the end?&lt;/p&gt;

  &lt;p&gt;There’s no unique way to idealize your moral views. There are tons of possible ways, and which one you choose is ultimately arbitrary. (Indeed, for any object-level moral view X, one possible idealization procedure is “update to view X”.) Of course, you might have meta-views over idealization procedures, and then you could find idealized idealized views (i.e., idealized views relative to idealized idealization procedures). But as you can see, we end up with infinite regress here.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If civilization undertakes a &lt;a href=&quot;https://forum.effectivealtruism.org/posts/H2zno3ggRJaph9P6c/quotes-about-the-long-reflection&quot;&gt;Long Reflection&lt;/a&gt;, it might fail to reach reflective equilibrium, or it might land on a bad set of terminal values.&lt;/p&gt;

&lt;p&gt;A counter-argument: if we do not believe civilization will identify correct values after a Long Reflection, why should we expect our current values to be correct? Determining whether we should care about this argument essentially requires solving meta-ethics. It is extremely unclear to me how much credence I should put on the outcome of a Long Reflection if it disagrees with my current values. As a result, I consider this argument (the impossibility of reflective equilibrium) fairly weak, but hard to definitively dismiss.&lt;/p&gt;

&lt;h2 id=&quot;argument-from-technological-hurdles&quot;&gt;Argument from technological hurdles&lt;/h2&gt;

&lt;p&gt;I alluded to this argument &lt;a href=&quot;#we-never-leave-the-solar-system&quot;&gt;previously&lt;/a&gt;: humans might never leave earth, because doing so requires overcoming significant technological hurdles for little obvious benefit.&lt;/p&gt;

&lt;p&gt;In the past, when humans migrated to new areas, they were largely motivated by economic opportunity. Other planets do not appear to offer any such opportunities, or at least not as directly. And while migrating on earth is not easy, migrating to another planet is far more difficult.&lt;/p&gt;

&lt;p&gt;Right now we don’t have the technology to populate other planets, and we can only loosely speculate as to how we might develop it in the future. This holds especially true for interstellar travel. We might someday become sufficiently economically and technologically prosperous that interstellar colonization becomes easy, but then again, we might not.&lt;/p&gt;

&lt;p&gt;Interstellar travel, terraforming, and other exceptionally difficult problems might only be solvable by a large unified Manhattan Project-style effort. In the past, people have mustered these sorts of efforts during wartime, or in bids to achieve global dominance, or possibly due to other idiosyncratic motivations that might not repeat themselves in the future. Interstellar colonization might never happen without a major concerted push for it.&lt;/p&gt;

&lt;p&gt;Another possibility: interstellar colonization might simply be impossible no matter how advanced our technology becomes. This seems implausible, but if true, it would mean we should not prioritize trying to make it happen, because we can’t do anything about it anyway.&lt;/p&gt;

&lt;p&gt;For more, see Mike Smith on &lt;a href=&quot;https://selfawarepatterns.com/2017/07/22/the-difficulty-of-interstellar-travel-for-humans/&quot;&gt;the difficulty of interstellar travel for humans&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Suppose the ideal future contains beings that experience much greater heights of well-being than humans do. Creating such beings might require substantial technological advances that won’t occur without concerted effort. If civilization broadly agrees that this is a desirable goal, then they can probably make it happen. But if we depend on a small number of altruists to create most of the value of the future (a possibility that Paul Christiano explores in &lt;a href=&quot;https://rationalaltruist.com/2013/02/27/why-will-they-be-happy/&quot;&gt;“Why might the future be good?”&lt;/a&gt;), then they might not have sufficient resources to achieve this goal.&lt;/p&gt;

&lt;h2 id=&quot;argument-from-natural-selection&quot;&gt;Argument from natural selection&lt;/h2&gt;

&lt;p&gt;People who use their resources for altruistic purposes might get out-competed by self-interested actors. Or evolution might otherwise select for undesirable properties that make it much less likely that we will achieve a maximally good future. This could happen either with ordinary biological humans or with post-humans.&lt;/p&gt;

&lt;p&gt;Undesirable natural selection could lead to catastrophically bad futures, such as if beings who live happy lives get out-competed by those who experience substantial suffering. Or it could lead to merely disappointing outcomes, such as if selection pressures against altruism make people less willing to create new happy beings.&lt;/p&gt;

&lt;p&gt;The details of how natural selection might operate in the future can get fairly technical. I will not discuss them in detail. Nick Bostrom’s &lt;a href=&quot;https://www.nickbostrom.com/fut/evolution.html&quot;&gt;The Future of Human Evolution&lt;/a&gt; provides a more detailed perspective.&lt;/p&gt;

&lt;h2 id=&quot;argument-from-status-quo&quot;&gt;Argument from status quo&lt;/h2&gt;

&lt;p&gt;Certain predictions that effective altruists commonly make—such as that we &lt;a href=&quot;https://opinionator.blogs.nytimes.com/2010/09/19/the-meat-eaters/&quot;&gt;end predation&lt;/a&gt;, or that we upload all humans’ brains onto computers, or that a superintelligent AI gains control of the world—would sound very strange to most people (and I am wary of &lt;a href=&quot;https://slatestarcodex.com/2015/08/11/book-review-chronicles-of-wasted-time/&quot;&gt;unusual predictions that lots of smart people believe&lt;/a&gt;&lt;sup id=&quot;fnref:21&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:21&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;13&lt;/a&gt;&lt;/sup&gt;). These predictions require something more than naively extrapolating from the status quo.&lt;/p&gt;

&lt;p&gt;We should assign significant probability to the hypothesis that the world of the distant future will look basically the same as the world today, at least in the ways that count.&lt;/p&gt;

&lt;p&gt;In general, perhaps due to the &lt;a href=&quot;https://en.wikipedia.org/wiki/Base_rate_fallacy&quot;&gt;base rate fallacy&lt;/a&gt;, people tend to underestimate the probability that things don’t change. Bryan Caplan has famously won &lt;a href=&quot;https://www.econlib.org/my-complete-bet-wiki/&quot;&gt;20 out of 20 bets&lt;/a&gt;, mostly by betting on the status quo (see &lt;a href=&quot;https://old.reddit.com/r/slatestarcodex/comments/a4q3dy/is_bryan_caplans_successful_betting_record_good/&quot;&gt;this reddit thread&lt;/a&gt; for relevant discussion).&lt;/p&gt;

&lt;p&gt;Perhaps we shouldn’t extrapolate from “things usually don’t change over timescales of several years” to “things don’t change over centuries.” Clearly things change dramatically over centuries. But we should use caution when making predictions about unprecedented events, such as humanity colonizing other planets, or creating non-biological beings. We should not assign too low a probability to the naively extrapolated future (as described in the introduction).&lt;/p&gt;

&lt;p&gt;Furthermore, for most of human civilization, change has occurred extremely slowly. (This is even more true if you look at the history of life on earth.) The rapid change over the last few centuries represents a big deviation from this trend. We might reasonably predict a reversion to the status quo of stability across centuries.&lt;/p&gt;

&lt;h1 id=&quot;comparing-disappointing-futures-to-x-risks&quot;&gt;Comparing disappointing futures to x-risks&lt;/h1&gt;

&lt;p&gt;We have some good reasons to expect civilization to achieve its potential. But as argued in the previous section, disappointing futures do not seem particularly unlikely. How likely are disappointing futures compared to existential catastrophes?&lt;/p&gt;

&lt;p&gt;Most people who prioritize x-risk don’t believe an existential catastrophe is overwhelmingly likely to occur. They use probabilistic reasoning: if this happened, it would be extremely bad; it doesn’t seem overwhelmingly unlikely; therefore, it’s important to work on. (I’m leaving out some steps in the argument, but that’s the basic idea.) The same reasoning applies to disappointing futures.&lt;/p&gt;

&lt;p&gt;The main argument I have heard against disappointing futures is that they probably won’t happen. But by the same logic, this argument would also defeat x-risk as a top priority.&lt;/p&gt;

&lt;p&gt;That said, one could give two other arguments against prioritizing disappointing futures that don’t apply to x-risk:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;A disappointing future is much less likely than an existential catastrophe.&lt;/li&gt;
  &lt;li&gt;Disappointing futures are harder to prevent.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I will discuss the second argument in &lt;a href=&quot;#tractability&quot;&gt;its own section&lt;/a&gt;. Let’s dive into the first argument by thinking about the probabilities of a disappointing future or existential catastrophe.&lt;/p&gt;

&lt;h2 id=&quot;probability-of-an-existential-catastrophe&quot;&gt;Probability of an existential catastrophe&lt;/h2&gt;

&lt;p&gt;&lt;a href=&quot;https://docs.google.com/spreadsheets/d/1W10B6NJjicD8O0STPiT3tNV3oFnT8YsfjmtYR8RO_RI/edit&quot;&gt;Michael Aird’s database&lt;/a&gt; of existential risk estimates includes three estimates of the probability of existential catastrophe over timescales longer than a century:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Leslie (1996)&lt;sup id=&quot;fnref:4&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:4&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;14&lt;/a&gt;&lt;/sup&gt;: “The probability of the human race avoiding extinction for the next five centuries is encouragingly high, perhaps as high as 70 percent.”&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://www.nickbostrom.com/existential/risks.html&quot;&gt;Bostrom (2002)&lt;/a&gt;&lt;sup id=&quot;fnref:5&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:5&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;15&lt;/a&gt;&lt;/sup&gt;: “My subjective opinion is that setting [the] probability [that an existential disaster will do us in] lower than 25% would be misguided, and the best estimate may be considerably higher.”&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://theprecipice.com/&quot;&gt;Ord (2020)&lt;/a&gt;&lt;sup id=&quot;fnref:6&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:6&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;16&lt;/a&gt;&lt;/sup&gt; (p. 169): “If forced to guess, I’d say there is something like a one in two chance that humanity avoids every existential catastrophe and eventually fulfills its potential: achieving something close to the best future open to us.”&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;A note on Ord’s estimate, because he provides the most detail: it is apparent from the context of this quote that either he believes disappointing futures have negligible probability, or that they count as a form of existential catastrophe.&lt;sup id=&quot;fnref:7&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:7&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;17&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;These estimates put the total probability of existential catastrophe somewhere around 25–50%, and they all agree with each other to within a factor of two. But we only have three estimates, so we should not anchor too strongly to this range.&lt;/p&gt;

&lt;p&gt;Arguably, we care about the probabilities of specific x-risks and specific disappointing futures more than we care about the aggregate probability, because we might be able to prevent specific futures (e.g., existential catastrophe due to AI) more effectively than we can prevent an entire class of futures (e.g., existential catastrophe in general). Predictions on specific x-risks over the long term appear even sparser than on x-risk in general. I only know of one directly relevant estimate: &lt;a href=&quot;https://arxiv.org/abs/1705.08807&quot;&gt;Grace et al. (2017)&lt;/a&gt;&lt;sup id=&quot;fnref:8&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:8&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;18&lt;/a&gt;&lt;/sup&gt; surveyed AI experts and found a median 5% probability that AI results in an “extremely bad (e.g human extinction)” outcome.&lt;/p&gt;

&lt;h2 id=&quot;probability-of-a-disappointing-future&quot;&gt;Probability of a disappointing future&lt;/h2&gt;

&lt;p&gt;I do not believe we can reasonably put the probability of a disappointing future at less than about 5%, which means the probabilities of a disappointing future and an existential catastrophe fall within an order of magnitude of each other. (I would put my subjective probability of a disappointing future at much higher than 5%, although I do not have high confidence in my estimate.) Even if x-risks matter more than disappointing futures, they do not matter overwhelmingly more.&lt;/p&gt;

&lt;p&gt;What do other people believe about disappointing futures?&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://reducing-suffering.org/summary-beliefs-values-big-questions/&quot;&gt;Brian Tomasik provides a list of probability estimates&lt;/a&gt; on big questions about the future. &lt;a href=&quot;http://www.stafforini.com/blog/what_i_believe/&quot;&gt;Pablo Stafforini gave his probabilities on the same statements&lt;/a&gt;. Three years ago, I gave &lt;a href=&quot;https://uploads.disquscdn.com/images/b3a3b7ab94525b2a25c542fa1046e05da0c25f7fd997e4930e00d9e388b64141.png&quot;&gt;my own probabilities&lt;/a&gt; in the comments under Stafforini’s post, although today I do not entirely agree with the answers I gave. I have reproduced some of the most relevant statements in the table below. Another individual &lt;a href=&quot;https://docs.google.com/spreadsheets/d/14hKOHhLTyD1DkYPTdUriRGW2oLPfvxRic1FtzUXg6Ds/edit&quot;&gt;created a Google Doc&lt;/a&gt; to allow more people to provide their own estimates, which a few people did; but nobody provided estimates on all of the statements below, so I did not include their answers.&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt;Brian&lt;/th&gt;
      &lt;th&gt;Pablo&lt;/th&gt;
      &lt;th&gt;Michael&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Human-inspired colonization of space will cause net suffering if it happens&lt;/td&gt;
      &lt;td&gt;72%&lt;/td&gt;
      &lt;td&gt;1%&lt;/td&gt;
      &lt;td&gt;40%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Earth-originating intelligence will colonize the entire galaxy (ignoring anthropic arguments)&lt;/td&gt;
      &lt;td&gt;50%&lt;/td&gt;
      &lt;td&gt;10%&lt;/td&gt;
      &lt;td&gt;20%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Wild-animal suffering will be a mainstream moral issue by 2100, conditional on biological humans still existing&lt;/td&gt;
      &lt;td&gt;15%&lt;/td&gt;
      &lt;td&gt;8%&lt;/td&gt;
      &lt;td&gt;50%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Humans will go extinct within millions of years for some reason other than AGI&lt;/td&gt;
      &lt;td&gt;5%&lt;/td&gt;
      &lt;td&gt;10%&lt;/td&gt;
      &lt;td&gt;75%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Regarding their credibility: Brian Tomasik has &lt;a href=&quot;https://reducing-suffering.org/how-many-wild-animals-are-there/&quot;&gt;written many excellent essays&lt;/a&gt; on relevant issues. Pablo Stafforini has a strong record on measurable predictions—as of this writing, &lt;a href=&quot;https://www.metaculus.com/accounts/profile/101341/&quot;&gt;he holds rank 16 on Metaculus&lt;/a&gt; out of hundreds (thousands?) of active users. I would consider them both far more credible on these predictions than most people.&lt;/p&gt;

&lt;p&gt;To my knowledge, those are all the public predictions on disappointing futures that have ever been made. I wanted more predictions, so I conducted a survey &lt;a href=&quot;https://www.facebook.com/groups/477649789306528/permalink/808450462893124/&quot;&gt;on the Effective Altruism Polls&lt;/a&gt; Facebook group. The survey received 12 responses—not a large sample, but much better than nothing.&lt;/p&gt;

&lt;p&gt;The survey asked the following questions:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;ol&gt;
    &lt;li&gt;What is the % probability that humanity achieves its long-term potential?&lt;/li&gt;
    &lt;li&gt;What is the % probability that human civilization continues to exist into the long-term future, does not fall into a dystopia, but never achieves its potential?&lt;/li&gt;
    &lt;li&gt;What is the % probability that the world in the long-term future is better than the world today, but less than 1000 times better?&lt;/li&gt;
  &lt;/ol&gt;
&lt;/blockquote&gt;

&lt;p&gt;I asked a fourth, optional question:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;ol&gt;
    &lt;li&gt;What does “humanity achieves its long-term potential” mean to you?&lt;/li&gt;
  &lt;/ol&gt;
&lt;/blockquote&gt;

&lt;p&gt;The responses to this fourth question suggested that most participants roughly share the views laid out in this essay as to how good humanity’s long-term potential can be.&lt;sup id=&quot;fnref:22&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:22&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;19&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;The second and third questions are meant to give two different interpretations of what a disappointing future means. The second question is less precise but more relevant, while the third question is less relevant but more precise.&lt;/p&gt;

&lt;p&gt;This table summarizes the responses to the three main questions:&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt;Q1&lt;/th&gt;
      &lt;th&gt;Q2&lt;/th&gt;
      &lt;th&gt;Q3&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;mean&lt;/td&gt;
      &lt;td&gt;19%&lt;/td&gt;
      &lt;td&gt;39%&lt;/td&gt;
      &lt;td&gt;41%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;median&lt;/td&gt;
      &lt;td&gt;8%&lt;/td&gt;
      &lt;td&gt;40%&lt;/td&gt;
      &lt;td&gt;40%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;standard deviation&lt;/td&gt;
      &lt;td&gt;23%&lt;/td&gt;
      &lt;td&gt;35%&lt;/td&gt;
      &lt;td&gt;39%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;&lt;a href=&quot;https://en.wikipedia.org/wiki/Interquartile_range&quot;&gt;interquartile range&lt;/a&gt;&lt;/td&gt;
      &lt;td&gt;23%&lt;/td&gt;
      &lt;td&gt;62%&lt;/td&gt;
      &lt;td&gt;71%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Some key observations:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Q1 and Q2 together imply a mean 42% probability of existential catastrophe (100% − 19% − 39%), which is close to the mean estimated probability of a disappointing future.&lt;/li&gt;
  &lt;li&gt;Most people did not assign a particularly low probability to disappointing futures.&lt;/li&gt;
  &lt;li&gt;People’s estimates vary widely, as shown by the large standard deviation and interquartile range. On every question, at least one person answered &amp;lt;= 0.1%, and at least one other answered &amp;gt;= 90%.&lt;/li&gt;
  &lt;li&gt;Most people gave similar probabilities for Q2 and Q3. But some people’s estimates differed substantially between the two (in both directions). This suggests people interpreted the questions in varying ways.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The mean and median estimates, as well as the high variance in answers, indicate that we should be concerned about the impact of disappointing futures on the expected value of the future.&lt;/p&gt;

&lt;p&gt;However, this survey has some important limitations:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;It only received a small number of responses.&lt;/li&gt;
  &lt;li&gt;It samples from the Effective Altruism Polls Facebook group, and this sample could be biased in various ways. But I expect that this is a much better sample than one could get from something like Mechanical Turk, because the respondents are unusually likely to have thought about longtermism, existential risk, and humanity’s potential.&lt;/li&gt;
  &lt;li&gt;The survey questions leave substantial room for interpretation, so we don’t know exactly what people meant by their responses.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1 id=&quot;what-might-reduce-the-risk-of-disappointing-futures&quot;&gt;What might reduce the risk of disappointing futures?&lt;/h1&gt;

&lt;p&gt;Suppose we agree that disappointing futures appear likely—possibly even more likely than existential catastrophes. What can we do about it?&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;
    &lt;p&gt;Some have proposed that we can increase the probability of positive outcomes via &lt;a href=&quot;https://reducing-suffering.org/values-spreading-often-important-extinction-risk/&quot;&gt;values spreading&lt;/a&gt; / &lt;a href=&quot;https://forum.effectivealtruism.org/posts/BY8gXSpGijypbGitT/why-i-prioritize-moral-circle-expansion-over-artificial&quot;&gt;moral circle expansion&lt;/a&gt;. People who argue for prioritizing values spreading have sometimes relied on the assumption that the long-term future is likely to have negative expected value. But this assumption is not necessary. We only require that the ideal future is much more desirable than the “average” future, and that values spreading can meaningfully increase the probability of achieving this ideal. (Jacy Reese discusses the &lt;a href=&quot;https://forum.effectivealtruism.org/posts/BY8gXSpGijypbGitT/why-i-prioritize-moral-circle-expansion-over-artificial#Tractability&quot;&gt;tractability of this solution&lt;/a&gt;.)&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;We could structure incentives such that people properly account for the interests of beings who have no direct representation. &lt;a href=&quot;https://philpapers.org/rec/JOHLIR&quot;&gt;Longtermist Institutional Reform&lt;/a&gt; by MacAskill &amp;amp; John (2020) offers some proposals on how to encourage institutions to make decisions that benefit future citizens. These proposals are preliminary and &lt;a href=&quot;https://blog.jessriedel.com/2020/07/30/comments-on-longtermist-institutional-reform-by-john-macaskill/&quot;&gt;have some substantial limitations&lt;/a&gt;. And protecting the interests of non-citizens (including animals, beings who would only exist if population increased, simulated minds, etc.) seems even more difficult. But the basic concept does appear promising. See also Dullaghan (2019), &lt;a href=&quot;https://forum.effectivealtruism.org/posts/kCkd9Mia2EmbZ3A9c/deliberation-may-improve-decision-making&quot;&gt;Deliberation May Improve Decision-Making&lt;/a&gt;.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Reducing global catastrophic risk might be the most effective way to decrease the probability of a disappointing future. A global catastrophe that destabilizes civilization without permanently destroying it might damage civilization’s propensity toward altruism. Nick Beckstead discusses this possibility in &lt;a href=&quot;https://blog.givewell.org/2015/08/13/the-long-term-significance-of-reducing-global-catastrophic-risks/&quot;&gt;“The long-term significance of reducing global catastrophic risks”&lt;/a&gt;.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;If altruistic actors build the first superintelligent AI, this could increase the chances that the AI gives proper consideration to all morally significant beings.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Some have argued that the best way to improve the long-term future is to increase economic growth (see Tyler Cowen in &lt;a href=&quot;https://80000hours.org/podcast/episodes/tyler-cowen-stubborn-attachments/&quot;&gt;an interview with 80,000 Hours&lt;/a&gt;). This seems unlikely to help us avert a disappointing future: marginal economic growth probably cannot, for example, provide sufficient resources or motivation for civilization to colonize space if it wasn’t going to happen anyway. Marginal growth can probably only make things happen sooner, not switch them from not-happening to happening. But it’s at least conceivable that increasing economic growth could reduce the risk of a disappointing future.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;The best way to reduce the risk of disappointing futures might be to save money. Longtermist altruists &lt;a href=&quot;https://philiptrammell.com/static/discounting_for_patient_philanthropists.pdf&quot;&gt;can behave more patiently than other actors&lt;/a&gt;. If altruists can preserve their assets over a long enough time horizon (and if non-altruists prefer not to do this), then the altruists will slowly gain power. Eventually, they can use that power to bring about an ideal world. If this plan works, it successfully avoids concerns about &lt;a href=&quot;#argument-from-the-impossibility-of-mutually-beneficial-trades&quot;&gt;the impossibility of mutually beneficial trades&lt;/a&gt;, because only altruists will inherit the savings of the previous generation’s altruists. But this requires &lt;a href=&quot;https://mdickens.me/2020/07/03/estimating_discount_rate/#expropriation-and-value-drift&quot;&gt;ensuring that the money stays in altruistic hands&lt;/a&gt;.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;These ideas all pose significant challenges. This list only represents a first look at some possible solutions.&lt;/p&gt;

&lt;h2 id=&quot;tractability&quot;&gt;Tractability&lt;/h2&gt;

&lt;p&gt;Existential risks and disappointing futures appear to have similar scale, so the choice of which to prioritize depends primarily on tractability. Each of the possible solutions in the previous section has its own tractability issues; we cannot say much about them in general, and a solution-by-solution discussion falls outside the scope of this essay. The list above includes references to deeper analyses of each solution, some of which address tractability.&lt;/p&gt;

&lt;p&gt;I do want to raise one broad consideration on tractability. If future people do not achieve an astronomically good outcome, that means the future people who want to achieve this are unable to. If they can’t set civilization on a good trajectory, why should we expect to succeed?&lt;/p&gt;

&lt;p&gt;This seems to me like a strong argument, and pushes in favor of investing to give later as the best solution. If we expect future people to have more power to avert a disappointing future, then we should give our resources to them.&lt;/p&gt;

&lt;p&gt;But I have three counter-arguments.&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Civilization might have inertia: as it grows, efforts to change its trajectory become more difficult. In that case, earlier efforts can expect to have more success, and we should act now. (However, this only applies if inertia grows more quickly than the investment rate of return. Otherwise, future beings will still be able to do more with our invested money than we can today.&lt;sup id=&quot;fnref:15&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:15&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;20&lt;/a&gt;&lt;/sup&gt;)&lt;/li&gt;
  &lt;li&gt;We might live at a time of unusually high leverage. People often claim this with respect to x-risk: we face an unusually high risk of extinction, and if we can get through this critical period, we should expect to survive for millennia. Something similar could be true with respect to avoiding disappointing futures, perhaps if we expect values to “lock in” at some point, or for some other reason.&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://mdickens.me/2020/07/23/unknown_x-risks/&quot;&gt;Existential risk could increase over time&lt;/a&gt; or remain relatively high for a long time. And perhaps deliberate reductions in x-risk only persist for a few decades or centuries. So this argument against working on disappointing futures also makes working on x-risk look less valuable. Therefore, it does not make disappointing futures look less tractable relative to x-risk.&lt;/li&gt;
&lt;/ol&gt;

&lt;h1 id=&quot;my-subjective-probability-estimates&quot;&gt;My subjective probability estimates&lt;/h1&gt;

&lt;p&gt;Making long-term predictions is hard. I do not expect that I can make reliable predictions about the long-term future of civilization. Still, for the sake of transparency, I will provide my rough probability estimates for the main outcomes discussed in this essay.&lt;/p&gt;

&lt;p&gt;A disappointing future is more likely than an existential catastrophe: 2 in 5&lt;/p&gt;

&lt;p&gt;Marginal work now to reduce the probability of a disappointing future does more good than marginal work now on existential risk: 1 in 3&lt;/p&gt;

&lt;p&gt;Working to determine whether we should prioritize disappointing futures has better marginal expected value than working directly on existential risk: 1 in 3&lt;sup id=&quot;fnref:9&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:9&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;21&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;The effective altruism community under-appreciates the significance of disappointing futures: 2 in 3&lt;/p&gt;

&lt;p&gt;On long-term outcomes:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;A “fast” existential catastrophe occurs: 1 in 2&lt;/li&gt;
  &lt;li&gt;Civilization lasts a long time, but is net negative or only weakly net positive: 1 in 3&lt;/li&gt;
  &lt;li&gt;Civilization does something kinda good, but not nearly as good as it could be: 1 in 6&lt;/li&gt;
  &lt;li&gt;Civilization achieves its potential: 1 in 50&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;(These add up to more than 100% because I don’t want to give too much precision in my estimates.)&lt;/p&gt;
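&lt;p&gt;As a quick arithmetic check (my own sketch, not part of the original estimates), the four outcomes sum to exactly 102%:&lt;/p&gt;

```python
# Check that the four long-term outcome estimates sum to slightly
# over 100%, as the parenthetical above notes.
from fractions import Fraction

estimates = {
    "fast existential catastrophe": Fraction(1, 2),
    "long-lived but net negative or weakly positive": Fraction(1, 3),
    "kinda good, well short of potential": Fraction(1, 6),
    "achieves its potential": Fraction(1, 50),
}
total = sum(estimates.values())
print(float(total))  # 1.02
```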

&lt;h1 id=&quot;acknowledgments&quot;&gt;Acknowledgments&lt;/h1&gt;

&lt;p&gt;Thanks to David Moss and Jason Schukraft for providing feedback on drafts of this essay.&lt;/p&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I prefer to speak about existential risks with respect to civilization rather than humanity, for three reasons:&lt;/p&gt;

      &lt;ol&gt;
        &lt;li&gt;We should care about all sentient life, not just humans. Use of the word “humanity” seems to implicate that only humanity matters, in a way that “civilization” doesn’t. (I do not mean to suggest that people who use the word “humanity” only care about humans, because I know that is often not the case.)&lt;/li&gt;
        &lt;li&gt;Civilization is the thing that has potential in the relevant sense, not the human species.&lt;/li&gt;
        &lt;li&gt;Future civilization might not consist (solely) of humans.&lt;/li&gt;
      &lt;/ol&gt;
      &lt;p&gt;&lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:23&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I will not attempt to justify this premise, but it is generally accepted by longtermists. &lt;a href=&quot;#fnref:23&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:18&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;This assumes that decreasing the probability of one outcome increases the probability of the following outcome. So decreasing the probability of extinction increases the probability that we survive but stay on earth, without changing the probability that we populate the galaxy. &lt;a href=&quot;#fnref:18&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:13&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Lightly edited to condense. &lt;a href=&quot;#fnref:13&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:12&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Some other views might still endorse the overwhelming importance of the far future. For example, consider a form of utilitarianism where the total utility of the world is the square root of the sum of each individual’s utility. But the total view with linear aggregation is the most notable such view. &lt;a href=&quot;#fnref:12&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:24&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;In a future where we successfully implement Dyson spheres of hedonium but still have wild animals for some reason, the animals’ suffering would be relatively minor compared to the hedonium, so I wouldn’t consider this a disappointing future. &lt;a href=&quot;#fnref:24&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:2&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;In &lt;a href=&quot;https://nickbostrom.com/astronomical/waste.html&quot;&gt;Astronomical Waste&lt;/a&gt;, Nick Bostrom estimates that the reachable universe can concurrently support about 10^38 humans. If the earth can support up to 10^15 humans at a time (which seems like a liberal upper estimate), then failing to colonize the universe inhibits our potential by a factor of 10^23. &lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:3&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;This depends on philosophical issues around how to trade between humans’ and animals’ welfare, but that’s beyond the scope of this essay. &lt;a href=&quot;#fnref:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:26&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;At least, this is true if we look at the animals that directly interact with civilization, most of whom live in bad conditions on factory farms. Civilization might have reduced wild animal suffering on balance, but if so, only by accident. &lt;a href=&quot;#fnref:26&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:19&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;&lt;a href=&quot;http://reflectivedisequilibrium.blogspot.com/2012/09/spreading-happiness-to-stars-seems.html&quot;&gt;Carl Shulman argues&lt;/a&gt; that “[s]preading happiness to the stars seems little harder than just spreading.” But even if (say) 50% of resources are taken by Eudaimonians and the other 50% by Locusts, shifting Eudaimonian control to 50% + epsilon still has enormous value. &lt;a href=&quot;#fnref:19&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:16&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Additionally, according to a &lt;a href=&quot;https://www.sentienceinstitute.org/animal-farming-attitudes-survey-2017&quot;&gt;survey by Sentience Institute (2017)&lt;/a&gt;, 49% of US adults support a ban on factory farming and 33% support a ban on animal farming. This and the Gallup survey cover different populations and use different methodologies, so they aren’t directly comparable. &lt;a href=&quot;#fnref:16&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:17&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Over long time horizons, natural selection might favor (post-)humans who try to have as many children as possible. This might be a mechanism by which long-term populations increase greatly, although it doesn’t necessarily mean those future populations will be close to as happy as possible, or that they will efficiently convert energy into well-being. &lt;a href=&quot;#fnref:17&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:21&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;The “that lots of smart people believe” clause doesn’t make the predictions &lt;em&gt;less&lt;/em&gt; likely. The point is that lots of smart people believing something isn’t necessarily good evidence that it’s true, particularly with regard to predictions about complicated issues. &lt;a href=&quot;#fnref:21&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:4&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Leslie, J. (2002). The End of the World: The Science and Ethics of Human Extinction. Routledge. &lt;a href=&quot;#fnref:4&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:5&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Bostrom, N. (2002). &lt;a href=&quot;https://www.nickbostrom.com/existential/risks.html&quot;&gt;Existential Risks.&lt;/a&gt; &lt;a href=&quot;#fnref:5&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:6&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Ord, T. (2020). The Precipice. Hachette Books. Kindle Edition. &lt;a href=&quot;#fnref:6&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:7&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;In the context surrounding this quote, Ord writes that he believes the probability of existential catastrophe in the next century is 1/3 of the total probability. In chapter 6, he assigns a 1 in 6 chance to existential catastrophe this century, implying a 50% total probability of existential catastrophe. Thus, he believes the only two possibilities are that humanity suffers an existential catastrophe or that it realizes its potential (which could be true by definition, if an existential catastrophe is defined as merely failing to realize humanity’s potential, rather than as a specific event that permanently prevents humanity’s potential from being realized).&lt;/p&gt;

      &lt;p&gt;Additionally, in chapter 6, he categorizes all existential risks and lists his probability estimates. Disappointing futures can only fit under the “Other anthropogenic risks” category, which he estimates at a 1 in 50 chance in the next century. Multiplying this by 3 gives a 6% total chance.&lt;/p&gt;

      &lt;p&gt;However, this arithmetic doesn’t really make sense, because a disappointing future can’t occur in the next century. A disappointing future is the non-occurrence of humanity realizing its potential, so it doesn’t happen at any particular time. Ord does not specify his long-run probability estimates of extinction from various causes, so as an upper bound, he might believe that 100% of the existential risk after the 21st century comes from disappointing futures. &lt;a href=&quot;#fnref:7&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:8&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Grace, K., et al. (2017). &lt;a href=&quot;https://arxiv.org/abs/1705.08807&quot;&gt;When Will AI Exceed Human Performance? Evidence from AI Experts.&lt;/a&gt; &lt;a href=&quot;#fnref:8&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:22&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;For example, some of them mentioned space-faring civilization, or the possibility of creating beings capable of much greater well-being than humans. &lt;a href=&quot;#fnref:22&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:15&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;This is analogous to how, in the Ramsey equation, the social discount rate accounts for the rate of consumption growth. Future consumption is less valuable in proportion to how wealthy people are, much like how future efforts on trajectory change are less valuable in proportion to how much inertia civilization has.&lt;/p&gt;

      &lt;p&gt;Technically, we should discount future spending on trajectory changes at the inertia growth rate plus the &lt;a href=&quot;https://mdickens.me/2020/07/03/estimating_discount_rate/&quot;&gt;philanthropic discount rate&lt;/a&gt;. &lt;a href=&quot;#fnref:15&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:9&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I would give a higher probability, but I am mildly pessimistic regarding the ability of meta-level research to solve fundamental problems in cause prioritization. If I thought cause prioritization were more tractable, I would consider it by far the most important thing to work on, given how much uncertainty we currently have. (When I speak of cause prioritization, that includes upstream issues like population ethics and decision theory.) &lt;a href=&quot;#fnref:9&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>Mission Hedging via Momentum Investing</title>
				<pubDate>Tue, 01 Sep 2020 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2020/09/01/mission-hedging-via-momentum/</link>
				<guid isPermaLink="true">http://mdickens.me/2020/09/01/mission-hedging-via-momentum/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;&lt;em&gt;Disclaimer: This should not be taken as investment advice.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;a href=&quot;/confidence_tags&quot;&gt;Confidence&lt;/a&gt;: Likely.&lt;sup id=&quot;fnref:8&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:8&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Hauke Hillebrandt explains &lt;strong&gt;mission hedging&lt;/strong&gt;:&lt;sup id=&quot;fnref:5&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:5&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;How should a foundation whose only mission is to prevent dangerous climate change invest their endowment? Surprisingly, in order to maximize expected utility, it might use ‘mission hedging’ investment principles and invest in fossil fuel stocks. This way it has more money to give to organisations that combat climate change when more fossil fuels are burned, fossil fuel stocks go up and climate change will get particularly bad. When fewer fossil fuels are burnt and fossil fuels stocks go down - the foundation will have less money, but it does not need the money as much.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;But mission hedging has a big downside: it reduces diversification, which hurts your risk-adjusted return.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The momentum premium&lt;/strong&gt;: “stocks with low returns over the last year tend to have low returns for the next few months and stocks with high past returns tend to have high future returns.”&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt; Investors can take advantage of the momentum premium by buying stocks that have gone up recently. The evidence suggests that it has persistently worked in nearly every financial market in the world, and there is a reasonable expectation that it will continue to work in the future.&lt;/p&gt;
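&lt;p&gt;To make the mechanics concrete, here is a minimal sketch of a cross-sectional momentum rule (my own illustration, with made-up tickers and returns; real momentum funds use more elaborate ranking and risk controls): rank stocks by trailing 12-month return and hold the top fraction.&lt;/p&gt;

```python
# Toy cross-sectional momentum rule: hold the top_frac of stocks
# ranked by trailing 12-month return. Tickers and returns are
# made up for illustration.
def momentum_portfolio(past_returns, top_frac=0.2):
    ranked = sorted(past_returns, key=past_returns.get, reverse=True)
    n = max(1, int(len(ranked) * top_frac))
    return ranked[:n]

print(momentum_portfolio(
    {"XOM": 0.30, "AAPL": 0.10, "KO": -0.05, "GE": -0.20, "T": 0.02}
))  # ['XOM']
```

&lt;p&gt;The point is only that the rule mechanically rotates into whatever has been rising, which is what makes it a candidate mission hedge.&lt;/p&gt;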

&lt;p&gt;In addition, momentum investing might provide effective mission hedging.&lt;/p&gt;

&lt;!-- more --&gt;

&lt;p&gt;Much has been written about how momentum outperforms the market—for more on this, see &lt;a href=&quot;https://www.aqr.com/Insights/Research/Journal-Article/Fact-Fiction-and-Momentum-Investing&quot;&gt;Asness et al. (2014)&lt;/a&gt;&lt;sup id=&quot;fnref:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt;. I will focus on the other claim: why would momentum investing work as a mission hedge?&lt;/p&gt;

&lt;p&gt;Suppose you want to hedge against climate change by buying fossil fuel stocks. When more fossil fuels are burned, those stocks gain value and you make more money. So you want to hold fossil fuel stocks when they’re going up. And that’s exactly what momentum does—it buys stocks that are going up.&lt;/p&gt;

&lt;p&gt;Of course, ideally, you want to buy stocks &lt;em&gt;before&lt;/em&gt; the price starts rising, not after. A momentum investing strategy won’t buy fossil fuel stocks unless they’ve already been going up for a while. So momentum will never work as well as directly mission hedging by buying and holding fossil fuel stocks.&lt;/p&gt;

&lt;p&gt;But the basic principle behind mission hedging is that the worse the world gets, the more important it is for you to have money. In the worst worlds for climate change, fossil fuel stocks will likely gain value persistently for long periods of time, which means they will maintain high momentum. If they outperform the market for five years in a row, a momentum strategy will hold them for the last four—80% as good as the direct mission hedging strategy that would have held fossil fuel stocks for all five years.&lt;/p&gt;
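&lt;p&gt;The arithmetic in the five-year example generalizes. Under the simplifying assumption that a 12-month-lookback momentum strategy picks up an outperforming stock exactly one year into its run, an N-year run is hedged for (N - 1)/N of its length:&lt;/p&gt;

```python
# Fraction of an N-year outperformance run during which a 12-month-
# lookback momentum strategy holds the stock, relative to a buy-and-
# hold mission hedge that holds it the whole time. Simplifying
# assumption: momentum buys exactly one year into the run.
def hedged_fraction(run_years):
    if run_years > 1:
        return (run_years - 1) / run_years
    return 0.0  # momentum never picks the stock up in time

print(hedged_fraction(5))  # 0.8, matching the five-year example
```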

&lt;p&gt;Momentum fails to mission hedge in worlds where fossil fuel stocks inconsistently outperform the market: for example, perhaps they outperform for a year, then underperform for the following year. A momentum strategy will only hold fossil fuel stocks for the first few months of the second year, during which they’re underperforming. So momentum will both fail to beat the market and fail to mission hedge. But in these worlds, climate change probably isn’t as bad as in the worlds where fossil fuel stocks consistently outperform.&lt;/p&gt;

&lt;p&gt;Naturally, the concept of mission hedging via momentum applies to other cause areas as well. For example, the recent rapid progress in AI development has made many people more concerned about AI safety. And at the same time, AI stocks have outperformed the market—as of this writing, the &lt;a href=&quot;https://www.etf.com/BOTZ&quot;&gt;Global X Robotics &amp;amp; Artificial Intelligence ETF (BOTZ)&lt;/a&gt; has returned 83% since January 2017, compared to 60% for the S&amp;amp;P 500. Alphabet, Inc. (GOOG), perhaps the leading company in AI development, has returned 92%.&lt;sup id=&quot;fnref:6&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:6&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;5&lt;/a&gt;&lt;/sup&gt; A momentum strategy would have held AI stocks during most of this period.&lt;/p&gt;

&lt;p&gt;In fact, momentum sometimes works as a mission hedge even if you don’t know what you should be hedging. For any type of bad outcome, if there are companies that profit off that outcome, you can expect to make money via momentum investing during the bad times, because you will end up investing in those companies. (This only works if such companies exist, and it requires that you figure out how to spend the money correctly after you earn it.)&lt;/p&gt;

&lt;p&gt;Regarding AI, I have seen some people make a slightly different argument: they expect transformative AI to arrive sooner than most people think, so it might make sense to buy stock in AI companies (e.g., &lt;a href=&quot;https://deluks917.wordpress.com/2020/07/29/short-timelines/&quot;&gt;deluks917 (2020)&lt;/a&gt;). This isn’t exactly the same as mission hedging, but like mission hedging, it involves reducing diversification to take advantage of a particular upside. And like mission hedging, momentum investing might capture that upside without requiring any specific bets about how the future will look. If transformative AI does come soon, then it seems likely (although not guaranteed) that this will be preceded by big developments in AI, and that AI stocks will go up a lot over the preceding 12 months. If that happens, a momentum strategy will end up buying AI stocks in time to capture the benefits of transformative AI.&lt;/p&gt;

&lt;p&gt;Whether this works depends on how momentum operates. There are two basic hypotheses as to why momentum investing outperforms: (1) the market under-reacts to good news, or (2) the market exhibits a delayed over-reaction. If hypothesis 2 is correct, then AI stocks likely won’t rise much until it’s too late. But the under-reaction hypothesis appears more likely given the evidence.&lt;sup id=&quot;fnref:10&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:10&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;6&lt;/a&gt;&lt;/sup&gt; That means if we see evidence that transformative AI is coming soon and AI stocks go up, we will have time to buy the high-momentum AI stocks before transformative AI arrives.&lt;/p&gt;

&lt;p&gt;(As of this writing, &lt;a href=&quot;https://etfsite.alphaarchitect.com/qmom/&quot;&gt;QMOM&lt;/a&gt;, a highly concentrated momentum ETF, holds 4 of the 12 US stocks on &lt;a href=&quot;https://deluks917.wordpress.com/2020/07/29/short-timelines/&quot;&gt;deluks917’s list of AI stocks&lt;/a&gt; (33%), even though it only holds 5% of US stocks in general. Disclaimer: I hold some shares of QMOM.)&lt;/p&gt;

&lt;p&gt;So momentum investing likely beats passive investing both in terms of risk-adjusted return and in terms of mission hedging. But instead of using momentum, should investors directly buy and hold mission hedge stocks? It’s not clear. To answer that, we would need to know how much better direct mission hedging works at hedging bad outcomes, and also how much better a risk-adjusted return to expect from momentum than from direct mission hedging. We don’t know the answer to either question.&lt;/p&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:8&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;This is my confidence specifically for the claim that momentum investing is better at mission hedging than the global market portfolio. &lt;a href=&quot;#fnref:8&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:5&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Hillebrandt, H. (2018). &lt;a href=&quot;https://forum.effectivealtruism.org/posts/iZp7TtZdFyW8eT5dA/a-generalized-strategy-of-mission-hedging-investing-in-evil&quot;&gt;A generalized strategy of ‘mission hedging’: investing in ‘evil’ to do more good.&lt;/a&gt; &lt;a href=&quot;#fnref:5&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Fama, E. &amp;amp; French, K. (2008). &lt;a href=&quot;http://schwert.ssb.rochester.edu/f532/ff_JF08.pdf&quot;&gt;Dissecting Anomalies.&lt;/a&gt; &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:3&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Asness, C., Frazzini, A., Israel, R., &amp;amp; Moskowitz, T. (2014). &lt;a href=&quot;https://www.aqr.com/Insights/Research/Journal-Article/Fact-Fiction-and-Momentum-Investing&quot;&gt;Fact, Fiction and Momentum Investing.&lt;/a&gt; &lt;a href=&quot;#fnref:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:6&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Total return calculated using https://www.etfreplay.com/charts.aspx on 2020-08-14. &lt;a href=&quot;#fnref:6&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:10&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Barberis, N., Shleifer, A., &amp;amp; Vishny, R. (1998). &lt;a href=&quot;https://scholar.harvard.edu/files/shleifer/files/model_invest_sent.pdf&quot;&gt;A Model of Investor Sentiment.&lt;/a&gt; Journal of Financial Economics 49 (3): 307-343. &lt;a href=&quot;#fnref:10&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>Giving Now vs. Later for Existential Risk: An Initial Approach</title>
				<pubDate>Fri, 28 Aug 2020 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2020/08/28/x_risk_now_or_later/</link>
				<guid isPermaLink="true">http://mdickens.me/2020/08/28/x_risk_now_or_later/</guid>
                <description>
                  
                  
                  
                  &lt;h2 id=&quot;summary&quot;&gt;Summary&lt;/h2&gt;

&lt;ul&gt;
  &lt;li&gt;This essay presents a variety of simple models on giving now vs. later for existential risk.&lt;/li&gt;
  &lt;li&gt;On the whole, these models do not strongly favor either option. Giving now looks better under certain plausible assumptions, and giving later looks better under others.
    &lt;ul&gt;
      &lt;li&gt;On the simplest possible model with no movement growth and no external actors, giving later looks better.&lt;/li&gt;
      &lt;li&gt;Higher movement growth/external spending pushes more in favor of giving now.&lt;/li&gt;
      &lt;li&gt;If our efforts can only temporarily reduce x-risk, we should spend a proportion of our budget in each period, rather than spending or saving all of it.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;It has been argued that, because philanthropists are more patient than most actors, they should give later. This argument does not necessarily work for existential risk.&lt;/li&gt;
  &lt;li&gt;The probability of extinction has relatively little effect on when to give.&lt;/li&gt;
&lt;/ul&gt;

&lt;!-- more --&gt;

&lt;p&gt;&lt;em&gt;Last updated 2020-09-14.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Cross-posted to the &lt;a href=&quot;https://forum.effectivealtruism.org/posts/3fmcNMrR8cktLnoYk/giving-now-vs-later-for-existential-risk-an-initial-approach&quot;&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;h2 id=&quot;contents&quot;&gt;Contents&lt;/h2&gt;

&lt;ul id=&quot;markdown-toc&quot;&gt;
  &lt;li&gt;&lt;a href=&quot;#summary&quot; id=&quot;markdown-toc-summary&quot;&gt;Summary&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#contents&quot; id=&quot;markdown-toc-contents&quot;&gt;Contents&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#a-simple-approach-to-giving-now-vs-later&quot; id=&quot;markdown-toc-a-simple-approach-to-giving-now-vs-later&quot;&gt;A simple approach to giving now vs. later&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#should-we-spend-our-whole-budget-at-once&quot; id=&quot;markdown-toc-should-we-spend-our-whole-budget-at-once&quot;&gt;Should we spend our whole budget at once?&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#relationship-to-the-ramsey-model&quot; id=&quot;markdown-toc-relationship-to-the-ramsey-model&quot;&gt;Relationship to the Ramsey model&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#variations&quot; id=&quot;markdown-toc-variations&quot;&gt;Variations&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#variation-well-being-increases-over-time&quot; id=&quot;markdown-toc-variation-well-being-increases-over-time&quot;&gt;Variation: Well-being increases over time&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#variation-exogenous-reduction-in-x-risk&quot; id=&quot;markdown-toc-variation-exogenous-reduction-in-x-risk&quot;&gt;Variation: Exogenous reduction in x-risk&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#variation-beyond-a-century&quot; id=&quot;markdown-toc-variation-beyond-a-century&quot;&gt;Variation: Beyond a century&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#variation-including-movement-growth&quot; id=&quot;markdown-toc-variation-including-movement-growth&quot;&gt;Variation: Including movement growth&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#variation-recurring-exogenous-spending&quot; id=&quot;markdown-toc-variation-recurring-exogenous-spending&quot;&gt;Variation: Recurring exogenous spending&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#variation-temporary-benefits-only&quot; id=&quot;markdown-toc-variation-temporary-benefits-only&quot;&gt;Variation: Temporary benefits only&lt;/a&gt;        &lt;ul&gt;
          &lt;li&gt;&lt;a href=&quot;#should-we-spend-our-whole-budget-at-once-1&quot; id=&quot;markdown-toc-should-we-spend-our-whole-budget-at-once-1&quot;&gt;Should we spend our whole budget at once?&lt;/a&gt;&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#assumptions&quot; id=&quot;markdown-toc-assumptions&quot;&gt;Assumptions&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#future-happiness-outweighs-suffering&quot; id=&quot;markdown-toc-future-happiness-outweighs-suffering&quot;&gt;Future happiness outweighs suffering&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#we-cannot-give-to-any-other-cause&quot; id=&quot;markdown-toc-we-cannot-give-to-any-other-cause&quot;&gt;We cannot give to any other cause&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#the-cost-of-reducing-x-risk-grows-with-inflation&quot; id=&quot;markdown-toc-the-cost-of-reducing-x-risk-grows-with-inflation&quot;&gt;The cost of reducing x-risk grows with inflation&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#zero-discount-rate-other-than-x-risk&quot; id=&quot;markdown-toc-zero-discount-rate-other-than-x-risk&quot;&gt;Zero discount rate (other than x-risk)&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#fixed-interest-rate&quot; id=&quot;markdown-toc-fixed-interest-rate&quot;&gt;Fixed interest rate&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#on-patience&quot; id=&quot;markdown-toc-on-patience&quot;&gt;On patience&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#conclusion&quot; id=&quot;markdown-toc-conclusion&quot;&gt;Conclusion&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#appendix&quot; id=&quot;markdown-toc-appendix&quot;&gt;Appendix&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#appendix-a-simple-approach-to-giving-now-vs-later&quot; id=&quot;markdown-toc-appendix-a-simple-approach-to-giving-now-vs-later&quot;&gt;Appendix A: Simple approach to giving now vs. later&lt;/a&gt;        &lt;ul&gt;
          &lt;li&gt;&lt;a href=&quot;#derivation-of-the-probability-of-extinction-fracp1--x&quot; id=&quot;markdown-toc-derivation-of-the-probability-of-extinction-fracp1--x&quot;&gt;Derivation of the probability of extinction \(\frac{p}{1 + x}\)&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#expected-utility-of-giving-now-or-later&quot; id=&quot;markdown-toc-expected-utility-of-giving-now-or-later&quot;&gt;Expected utility of giving now or later&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#derivation-of-the-break-even-point-r--p&quot; id=&quot;markdown-toc-derivation-of-the-break-even-point-r--p&quot;&gt;Derivation of the break-even point \(r = p\)&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#when-to-spend-all-money-at-once&quot; id=&quot;markdown-toc-when-to-spend-all-money-at-once&quot;&gt;When to spend all money at once&lt;/a&gt;&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#appendix-b-model-variation-where-well-being-increases-over-time&quot; id=&quot;markdown-toc-appendix-b-model-variation-where-well-being-increases-over-time&quot;&gt;Appendix B: Model variation where well-being increases over time&lt;/a&gt;        &lt;ul&gt;
          &lt;li&gt;&lt;a href=&quot;#expected-utility-of-giving-now-or-later-1&quot; id=&quot;markdown-toc-expected-utility-of-giving-now-or-later-1&quot;&gt;Expected utility of giving now or later&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#derivation-of-the-break-even-point&quot; id=&quot;markdown-toc-derivation-of-the-break-even-point&quot;&gt;Derivation of the break-even point&lt;/a&gt;&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#appendix-c-beyond-a-century&quot; id=&quot;markdown-toc-appendix-c-beyond-a-century&quot;&gt;Appendix C: Beyond a century&lt;/a&gt;        &lt;ul&gt;
          &lt;li&gt;&lt;a href=&quot;#option-2-is-never-best&quot; id=&quot;markdown-toc-option-2-is-never-best&quot;&gt;Option 2 is never best&lt;/a&gt;&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#appendix-d-model-variation-where-spending-only-temporarily-reduces-x-risk&quot; id=&quot;markdown-toc-appendix-d-model-variation-where-spending-only-temporarily-reduces-x-risk&quot;&gt;Appendix D: Model variation where spending only temporarily reduces x-risk&lt;/a&gt;        &lt;ul&gt;
          &lt;li&gt;&lt;a href=&quot;#optimality-of-spreading-spending-over-time&quot; id=&quot;markdown-toc-optimality-of-spreading-spending-over-time&quot;&gt;Optimality of spreading spending over time&lt;/a&gt;&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#appendix-e-optimal-investment-where-spending-temporarily-reduces-x-risk&quot; id=&quot;markdown-toc-appendix-e-optimal-investment-where-spending-temporarily-reduces-x-risk&quot;&gt;Appendix E: Optimal investment where spending temporarily reduces x-risk&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#acknowledgments&quot; id=&quot;markdown-toc-acknowledgments&quot;&gt;Acknowledgments&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#notes&quot; id=&quot;markdown-toc-notes&quot;&gt;Notes&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h1 id=&quot;a-simple-approach-to-giving-now-vs-later&quot;&gt;A simple approach to giving now vs. later&lt;/h1&gt;

&lt;p&gt;Let’s begin by considering a highly simplified model of reality:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Every year, the world experiences some constant level of positive well-being.&lt;sup id=&quot;fnref:8&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:8&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;&lt;/li&gt;
  &lt;li&gt;And every year, it faces some probability of extinction&lt;sup id=&quot;fnref:19&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:19&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt;, which would reduce all future well-being to zero.&lt;/li&gt;
  &lt;li&gt;If we spend money on reducing x-risk, we can permanently decrease the probability of extinction.&lt;/li&gt;
  &lt;li&gt;If we do not reduce x-risk, no one else will, and the probability will stay the same.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For simplicity, let’s say we only have two choices:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Give Now: Spend all our money now on reducing existential risk.&lt;/li&gt;
  &lt;li&gt;Give Later: Invest our money for the next 100 years, and then spend all of it on reducing existential risk.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;(For our purposes, “giving” and “spending” mean the same thing—using money to do good.)&lt;/p&gt;

&lt;p&gt;How do these options compare?&lt;/p&gt;

&lt;p&gt;Giving now has the advantage that if we do reduce x-risk now, we get to enjoy a world with reduced risk for an extra century. But giving later allows us to compound our investments for a century, so we can spend much more money on x-risk reduction and therefore reduce it to a lower level.&lt;/p&gt;

&lt;p&gt;To compare these options, we need to know the value of spending on x-risk reduction. Let’s make an additional assumption: reducing x-risk by a fixed proportion requires exponentially increasing spending. Or in other words, spending on x-risk has &lt;a href=&quot;http://www.fhi.ox.ac.uk/law-of-logarithmic-returns/&quot;&gt;logarithmic return&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This means the probability of extinction equals \(\frac{p}{1 + x}\), where \(p\) gives the initial probability and \(x\) is how much we’ve spent in total (see &lt;a href=&quot;#derivation-of-the-probability-of-extinction-fracp1--x&quot;&gt;Appendix A&lt;/a&gt; for derivation). The scale of \(x\) is normalized such that one unit of spending halves x-risk.&lt;/p&gt;

&lt;p&gt;Let \(r\) be the interest rate. If we give now, we can spend \(x\) on reducing x-risk, and if we give later, we can spend \(x \cdot (1 + r)^{100}\).&lt;/p&gt;

&lt;p&gt;Thus: if we give now, the x-risk level equals \(\frac{p}{1 + x}\) from now until the end of time.&lt;/p&gt;

&lt;p&gt;If we give later, the x-risk level for the first century equals \(p\). Then for all time after the first century, it equals \(\frac{p}{1 + x \cdot (1 + r)^{100}}\).&lt;/p&gt;

&lt;p&gt;From this, we can calculate the expected utilities of giving now and giving later. See &lt;a href=&quot;#appendix-a-simple-approach-to-giving-now-vs-later&quot;&gt;Appendix A&lt;/a&gt; for the details of the calculation.&lt;/p&gt;

&lt;p&gt;When the interest rate \(r\) is high, giving later looks better. Low interest rates make giving now look preferable. The break-even point occurs approximately when \(r = p\). &lt;strong&gt;When \(r &amp;lt; p\), give now. When \(r &amp;gt; p\), give later.&lt;/strong&gt; In other words, give now if and only if the probability of extinction exceeds the interest rate.&lt;/p&gt;
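
&lt;p&gt;The break-even rule can be checked numerically. Below is a minimal sketch (the function names are mine); it assumes constant well-being of one unit per year, so that total expected utility under a constant annual risk \(q\) is the geometric sum \(1/q\):&lt;/p&gt;

```python
def eu_give_now(p, x):
    # Spend x now: annual extinction risk is p / (1 + x) forever.
    # With well-being 1 per year, expected utility is the geometric sum 1/q.
    q = p / (1 + x)
    return 1 / q

def eu_give_later(p, x, r, horizon=100):
    # Risk stays at p for `horizon` years, then drops once we spend the
    # compounded budget x * (1 + r)**horizon.
    q_after = p / (1 + x * (1 + r) ** horizon)
    first_century = (1 - (1 - p) ** horizon) / p
    tail = (1 - p) ** horizon / q_after
    return first_century + tail

def break_even_rate(p, x):
    # Bisect for the interest rate at which the two options are equal.
    lo, hi = 0.0, 0.5
    for _ in range(200):
        mid = (lo + hi) / 2
        if eu_give_later(p, x, mid) > eu_give_now(p, x):
            hi = mid
        else:
            lo = mid
    return mid
```

&lt;p&gt;With \(p = 0.002\), the bisection lands at roughly 0.2% regardless of the budget \(x\), matching the \(r = p\) rule (the exact break-even works out to \(p/(1-p)\), which is nearly identical for small \(p\)).&lt;/p&gt;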

&lt;p&gt;So, based on this highly simplified approach, what can we say about whether to give now or later?&lt;/p&gt;

&lt;p&gt;To answer that, we just need to know the interest rate and the probability of extinction. Michael Aird created a &lt;a href=&quot;https://docs.google.com/spreadsheets/d/1W10B6NJjicD8O0STPiT3tNV3oFnT8YsfjmtYR8RO_RI/edit#gid=0&quot;&gt;database of existential risk estimates&lt;/a&gt;. &lt;a href=&quot;https://docs.google.com/spreadsheets/d/1zP2gxZZXkmF6SlGKxvqqjwViy3mcK-_YzqCM6A8pDGI/edit#gid=0&quot;&gt;My copy of this database&lt;/a&gt; converts these estimates into annual probabilities. If we naively take the average of these estimates, we get a 0.23% annual probability of extinction or existential catastrophe&lt;sup id=&quot;fnref:20&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:20&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt;, which means we only need to earn more than a 0.23% real interest rate over the next century for giving later to beat giving now. The highest estimate on the list (at the time of this writing) is 0.71%, so even the most pessimistic outlook only requires that we earn more than 0.71% real interest per year to prefer giving later. (0.71% represents the most pessimistic best guess, rather than the pessimistic end of one person’s confidence interval. The high end of a well-informed confidence interval might give a worse probability than 0.71%.)&lt;/p&gt;
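
&lt;p&gt;The conversion to annual probabilities can be sketched in one line (&lt;code&gt;annualize&lt;/code&gt; is my name for it, and it assumes the risk is spread uniformly and independently across years):&lt;/p&gt;

```python
def annualize(total_risk, years):
    # A risk of `total_risk` over `years` years corresponds to the annual
    # probability q satisfying (1 - q)^years = 1 - total_risk.
    return 1 - (1 - total_risk) ** (1 / years)
```

&lt;p&gt;For example, a 10% chance of extinction over a century works out to about a 0.105% annual probability.&lt;/p&gt;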

&lt;p&gt;This model is just meant as an initial attempt; substantial improvements could be made. But under this model, giving later pretty much always beats giving now.&lt;/p&gt;

&lt;h2 id=&quot;should-we-spend-our-whole-budget-at-once&quot;&gt;Should we spend our whole budget at once?&lt;/h2&gt;

&lt;p&gt;Update 2020-09-09: &lt;strong&gt;The original text of this section was incorrect.&lt;/strong&gt; I originally wrote that we always prefer to spend our whole budget at once. While this is true under certain conditions, it is not necessarily true.&lt;/p&gt;

&lt;p&gt;If we spend money to directly improve people’s well-being (such as by making &lt;a href=&quot;https://www.givewell.org/international/technical/programs/cash-transfers&quot;&gt;cash transfers&lt;/a&gt; to the developing world), then under certain assumptions, we want to spend the same fraction of our remaining budget in each period, as shown by &lt;a href=&quot;http://piketty.pse.ens.fr/files/Ramsey1928.pdf&quot;&gt;Ramsey (1928)&lt;/a&gt;&lt;sup id=&quot;fnref:9&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:9&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt;. Specifically, we should behave this way when money spent in one period only benefits that period, rather than providing a continuing benefit to all periods. But in the model laid out in the previous section, this assumption doesn’t hold. The model assumes that if we reduce x-risk now, this will also reduce x-risk in all future periods. Therefore, we don’t need to spend money in every period to continually push down x-risk.&lt;/p&gt;

&lt;p&gt;We want to spend our whole budget in one period if and only if the last dollar spent in that period does more good than the first dollar spent in any other period. Under the model described above, this condition holds for most reasonable input parameters. (See &lt;a href=&quot;#when-to-spend-all-money-at-once&quot;&gt;Appendix A&lt;/a&gt; for the precise conditions where this holds.)&lt;/p&gt;

&lt;p&gt;A more accurate model would allow us to spend money at any time, not just at two specific periods (now or 100 years from now). But the two-period model can at least give us a sense of the balance of how valuable it is to spend now versus later.&lt;/p&gt;

&lt;h2 id=&quot;relationship-to-the-ramsey-model&quot;&gt;Relationship to the Ramsey model&lt;/h2&gt;

&lt;p&gt;&lt;a href=&quot;http://piketty.pse.ens.fr/files/Ramsey1928.pdf&quot;&gt;Ramsey (1928)&lt;/a&gt;&lt;sup id=&quot;fnref:9:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:9&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt; proposed what has become the standard economic model for determining optimal spending over time. In this model:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Society consumes resources and gains some well-being as a result of this consumption.&lt;/li&gt;
  &lt;li&gt;Due to economic growth, consumption increases over time.&lt;/li&gt;
  &lt;li&gt;Future well-being is considered less valuable than present well-being, according to some discount rate.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;One reason to use a positive discount rate is that we might go extinct. If we don’t exist, we cannot experience anything of value, so we should discount the future based on this possibility. The Ramsey model assumes that actors can choose the rate of consumption but cannot affect the discount rate. My model of x-risk reduction follows the same outline as Ramsey’s model, but instead treats consumption as fixed and allows the discount rate to change via reducing x-risk. Instead of spending money to increase consumption and thus improve well-being, actors can spend money to reduce the discount rate and thus increase the probability of realizing a positive future.&lt;/p&gt;

&lt;h1 id=&quot;variations&quot;&gt;Variations&lt;/h1&gt;

&lt;p&gt;In this part, we will consider several variations on our initial simple approach and see how they give different results. Unless stated otherwise, these build off the initial model, not off each other.&lt;/p&gt;

&lt;p&gt;This list certainly does not include every interesting variation. Future work could extend these to develop more useful models.&lt;/p&gt;

&lt;h2 id=&quot;variation-well-being-increases-over-time&quot;&gt;Variation: Well-being increases over time&lt;/h2&gt;

&lt;p&gt;Originally, we assumed that the world experienced the same amount of well-being each year. This makes the math easy, but it’s not a realistic assumption. More likely, the world will get better over time as people become better off (and as population grows, depending on your views on population ethics).&lt;/p&gt;

&lt;p&gt;Let’s take a simple approach to incorporate increasing well-being into our model. Assume global well-being increases logarithmically with global wealth, and that wealth increases exponentially. Thus, well-being increases linearly over time.&lt;/p&gt;

&lt;p&gt;This new variation makes giving later look even better than it did in the first model. The break-even interest rate is always less than \(p\). (See &lt;a href=&quot;#appendix-b-model-variation-where-well-being-increases-over-time&quot;&gt;Appendix B&lt;/a&gt; for proof.) This variation relatively favors giving later because:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;It values future well-being more highly.&lt;/li&gt;
  &lt;li&gt;Giving later does a better job of reducing x-risk in the distant future at the expense of letting it stay high during this century.&lt;/li&gt;
&lt;/ol&gt;
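
&lt;p&gt;We can spot-check the claim that the break-even rate falls below \(p\) with a direct simulation (a sketch under illustrative parameters I chose: \(p = 1\%\) and a budget of one halving-cost unit, with year-\(t\) well-being proportional to \(t\)):&lt;/p&gt;

```python
def eu_linear(p, x, r, give_now, horizon=50_000):
    # Year-t well-being is proportional to t. Sum utility weighted by the
    # probability that civilization survives to year t.
    later_budget = x * (1 + r) ** 100
    total, survival = 0.0, 1.0
    for t in range(horizon):
        total += survival * t
        if give_now:
            q = p / (1 + x)             # reduced risk from year 0 onward
        elif t >= 100:
            q = p / (1 + later_budget)  # reduced risk only after year 100
        else:
            q = p                       # baseline risk while we wait
        survival *= 1 - q
    return total
```

&lt;p&gt;At \(r = p\), giving later already wins, and at \(r = 0\), giving now wins, so the break-even interest rate lies strictly below \(p\), as Appendix B proves.&lt;/p&gt;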

&lt;h2 id=&quot;variation-exogenous-reduction-in-x-risk&quot;&gt;Variation: Exogenous reduction in x-risk&lt;/h2&gt;

&lt;p&gt;Some people believe that, over the next century or so, civilization will collectively recognize the importance of x-risk and begin investing much more in reducing it. How can we incorporate this into our model? (For simplicity, let’s return to our initial assumption that well-being is constant across time.)&lt;/p&gt;

&lt;p&gt;Suppose one century from now, the world will spend some amount of money on x-risk (call this “exogenous spending”). For simplicity, we assume that this spending occurs entirely at a single moment in time. Thus, we have two options:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Spend our budget now on reducing x-risk. Then, one century from now, some amount of exogenous spending will occur.&lt;/li&gt;
  &lt;li&gt;Invest our money for the next 100 years, at which time we spend our money alongside the exogenous spending.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;(We could also invest for longer than 100 years. I will address that possibility in the next section.)&lt;/p&gt;

&lt;p&gt;This variation makes the model harder to reason about because now we have more free variables. In the initial model, technically we had two free variables: the initial x-risk rate \(p\) and the scale factor for spending \(x\). The variable \(x\) is defined such that one unit of spending equals the &lt;strong&gt;halving cost&lt;/strong&gt;: the cost to halve x-risk from its initial level. (In mathematical terms, x-risk as a function of spending equals \(\frac{p}{1 + x}\), which gives \(\frac{p}{2}\) when \(x = 1\).) If the halving cost is (say) $1 billion, then we should scale the units of \(x\) such that \(x = 1\) corresponds to $1 billion of spending. But in the initial model, whether to give now or later did not depend on the halving cost; all that mattered was the initial probability \(p\). In this new variation, we care about \(p\) &lt;em&gt;and&lt;/em&gt; the halving cost &lt;em&gt;and&lt;/em&gt; the level of exogenous spending, so now we have three free variables instead of one.&lt;/p&gt;

&lt;p&gt;More free variables means it’s much harder to put precise conditions on when to give now or later. We can start with a simple observation: higher exogenous spending makes giving now look relatively better. (Or, to put it another way, as exogenous spending increases, the break-even interest rate increases as well.) The reason for this is that exogenous spending next century increases undiscounted utility by the same amount whether we give now or later. But if we give now, we’re more likely to make it to next century, so this utility increase is worth more.&lt;/p&gt;

&lt;p&gt;It’s not at all obvious how much exogenous spending to expect relative to our own spending. First of all, who is “we”, exactly, and how much money do we have? Then, how much do we expect civilization to spend on x-risk?&lt;/p&gt;

&lt;p&gt;In the absence of any clear answer, we can look at a few examples. Suppose the baseline probability of extinction is 0.2% per year, and that we have a budget of $1 billion (which very roughly represents the x-risk budget of the effective altruism community). The following two tables give the break-even interest rate for various levels of exogenous spending. The first table assumes we need to spend $10 million to halve x-risk (that is, one unit of spending is worth $10 million) and the second assumes we need to spend $100 billion. In the first case, we only need to spend 1% of the EA budget to halve x-risk, and in the second case, the entire EA budget only goes 1% of the way to halving x-risk. These show us a spectrum of how the cost to halve x-risk changes things.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Table 1: Break-even real interest rates with $10 million halving cost&lt;/strong&gt;&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Exogenous Spending&lt;/th&gt;
      &lt;th&gt;Interest Rate&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;$0&lt;/td&gt;
      &lt;td&gt;0.20%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;$1 billion&lt;/td&gt;
      &lt;td&gt;0.37%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;$10 billion&lt;/td&gt;
      &lt;td&gt;1.2%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;$100 billion&lt;/td&gt;
      &lt;td&gt;3.2%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;$1 trillion&lt;/td&gt;
      &lt;td&gt;5.5%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;$10 trillion&lt;/td&gt;
      &lt;td&gt;8.0%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;&lt;strong&gt;Table 2: Break-even real interest rates with $100 billion halving cost&lt;/strong&gt;&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Exogenous Spending&lt;/th&gt;
      &lt;th&gt;Interest Rate&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;$0&lt;/td&gt;
      &lt;td&gt;0.20%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;$1 billion&lt;/td&gt;
      &lt;td&gt;0.20%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;$10 billion&lt;/td&gt;
      &lt;td&gt;0.22%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;$100 billion&lt;/td&gt;
      &lt;td&gt;0.35%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;$1 trillion&lt;/td&gt;
      &lt;td&gt;1.2%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;$10 trillion&lt;/td&gt;
      &lt;td&gt;3.1%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;$100 trillion&lt;/td&gt;
      &lt;td&gt;5.4%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;$1 quadrillion&lt;/td&gt;
      &lt;td&gt;7.9%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;
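
&lt;p&gt;The break-even rates in both tables can be reproduced with a short script (a sketch that assumes the constant-well-being expected-utility formulas from the simple model; the function names are mine):&lt;/p&gt;

```python
def eu(q_first_century, q_after):
    # Constant well-being of 1 per year: a geometric sum over the first
    # century, plus the surviving tail at the post-spending risk level.
    first = (1 - (1 - q_first_century) ** 100) / q_first_century
    tail = (1 - q_first_century) ** 100 / q_after
    return first + tail

def break_even(p, budget, exogenous, halving_cost):
    # Bisect for the interest rate at which giving now and giving in a
    # century are equally good. Spending is converted to halving-cost units.
    b = budget / halving_cost
    e = exogenous / halving_cost
    def later_beats_now(r):
        now = eu(p / (1 + b), p / (1 + b + e))
        later = eu(p, p / (1 + b * (1 + r) ** 100 + e))
        return later > now
    lo, hi = 0.0, 0.5
    for _ in range(200):
        mid = (lo + hi) / 2
        if later_beats_now(mid):
            hi = mid
        else:
            lo = mid
    return mid
```

&lt;p&gt;For instance, &lt;code&gt;break_even(0.002, 1e9, 1e9, 1e7)&lt;/code&gt; returns roughly 0.0037, matching the $1 billion row of Table 1.&lt;/p&gt;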

&lt;p&gt;We should expect gross world product (GWP) to be much higher in 100 years’ time, which means these dollar amounts for exogenous spending will represent a smaller fraction of GWP than they would today. We might reasonably expect real GWP to grow 1.5% per year, which corresponds to about 5x growth in a century. That’s why I included up to $1 quadrillion: even though that’s a ridiculous amount of money today, society could contribute $1 quadrillion (in 2020 dollars) to x-risk research over the next century by spending something like 5% of GWP per year.&lt;/p&gt;

&lt;p&gt;Some observations based on these tables:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;A lower halving cost results in higher break-even interest rates (and thus relatively favors giving now).&lt;/li&gt;
  &lt;li&gt;The lower halving cost of $10 million requires something like $100 billion in exogenous spending before giving now beats giving later. ($10 billion exogenous spending gives a 1.2% break-even interest rate, which seems easily achievable; $1 trillion gives a rate of 5.5%, which might be achievable but seems a bit high.)&lt;/li&gt;
  &lt;li&gt;At a halving cost of $100 billion, we require more like $10 trillion in exogenous spending before giving now beats giving later.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Based on my own beliefs about achievable interest rates, halving cost, and how much I expect civilization to spend on x-risk reduction in the future, I weakly lean toward preferring giving later according to this model. I find it somewhat implausible that we could see low enough interest rates, low enough halving cost, or high enough exogenous spending for giving now to look better. However, this is only based on my initial impression. I have not put much thought into the values of the halving cost or exogenous spending.&lt;/p&gt;

&lt;h2 id=&quot;variation-beyond-a-century&quot;&gt;Variation: Beyond a century&lt;/h2&gt;

&lt;p&gt;Previously, we assumed we had a binary choice: give now or give a century from now. Let’s take the exogenous-spending model from the previous section and extend it to say we can use our money at any point in the future. When is the best time to give?&lt;/p&gt;

&lt;p&gt;First, observe that if we anticipate no exogenous spending, it doesn’t matter when “later” is. The break-even interest rate always approximately equals \(p\).&lt;sup id=&quot;fnref:5&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:5&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;5&lt;/a&gt;&lt;/sup&gt; At \(r = p\), giving now is as good as giving later, no matter how far in the future you go. As a consequence, if \(r &amp;gt; p\), we should postpone our spending indefinitely.&lt;/p&gt;
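
&lt;p&gt;This horizon-independence is easy to verify numerically (a sketch with no exogenous spending, reusing the simple model’s formulas; the parameter choices are mine):&lt;/p&gt;

```python
def break_even(p, x, horizon):
    # Break-even interest rate between giving now and giving after
    # `horizon` years, with constant well-being and no exogenous spending.
    def later_beats_now(r):
        now = (1 + x) / p   # utility 1/q with q = p / (1 + x)
        q_after = p / (1 + x * (1 + r) ** horizon)
        later = (1 - (1 - p) ** horizon) / p + (1 - p) ** horizon / q_after
        return later > now
    lo, hi = 0.0, 0.5
    for _ in range(200):
        mid = (lo + hi) / 2
        if later_beats_now(mid):
            hi = mid
        else:
            lo = mid
    return mid

rates = [break_even(0.002, 1.0, h) for h in (50, 100, 500, 1000)]
```

&lt;p&gt;All four rates agree to many decimal places: each equals \(p/(1-p)\), about 0.2%, regardless of the horizon.&lt;/p&gt;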

&lt;p&gt;The time horizon only matters when we expect nonzero exogenous spending. In that case, we have essentially three choices:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Give now.&lt;/li&gt;
  &lt;li&gt;Give when the exogenous spending occurs.&lt;/li&gt;
  &lt;li&gt;Give in the distant future.&lt;sup id=&quot;fnref:18&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:18&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;6&lt;/a&gt;&lt;/sup&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Under what conditions do we prefer each of these?&lt;/p&gt;

&lt;p&gt;To answer this, we need to look at how exogenous spending affects x-risk. Suppose the world spends some amount \(y\) on reducing x-risk. This reduces the probability of extinction from its initial level \(p\) to a lower level \(\frac{p}{1 + y}\).&lt;/p&gt;

&lt;p&gt;Consider the perspective of a philanthropist 100 years from now. To them, it appears that the baseline probability of extinction equals \(\frac{p}{1 + y}\) instead of \(p\), and also that halving x-risk takes a lot more work than it did 100 years ago (\(1 + y\) times as much work, to be exact). This philanthropist is confronted with exactly the same choice as in our initial simple model with no exogenous spending, except that (a) instead of x-risk \(p\), they see x-risk \(\frac{p}{1 + y}\), and (b) spending is scaled down by a factor of \(1 + y\) (so for example, if \(1 + y = 100\), then $100 of spending for them would be like $1 of spending today).&lt;/p&gt;

&lt;p&gt;Recall that, in our initial model, giving now beats giving later when the interest rate \(r\) is less than the probability of extinction, and vice versa. Therefore, giving when exogenous spending occurs is better than giving in the distant future if and only if \(r &amp;lt; \frac{p}{1 + y}\).&lt;/p&gt;

&lt;p&gt;If we derive the formula for the break-even interest rate between options 1 and 2 (see &lt;a href=&quot;#option-2-is-never-best&quot;&gt;Appendix C&lt;/a&gt;), we will see that there is no situation in which we prefer option 2. According to this model, we should never give when the exogenous spending occurs. We should always either give now or in the distant future. Specifically, we should give now if and only if \(r &amp;lt; \frac{p}{1 + y}\).&lt;sup id=&quot;fnref:7&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:7&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;7&lt;/a&gt;&lt;/sup&gt; Intuitively, there’s nothing special about the time when exogenous spending occurs, so we never want to give at that time.&lt;/p&gt;
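
&lt;p&gt;We can spot-check the never-prefer-option-2 result numerically. The sketch below uses illustrative parameters of my choosing (\(p = 0.002\), a budget and exogenous spending of one halving-cost unit each, and “the distant future” taken as year 1,000):&lt;/p&gt;

```python
def eu(segments):
    # `segments` is a list of (years, annual_risk) pairs; the last
    # segment's risk applies forever. Constant well-being of 1 per year.
    total, survival = 0.0, 1.0
    for years, q in segments[:-1]:
        total += survival * (1 - (1 - q) ** years) / q
        survival *= (1 - q) ** years
    return total + survival / segments[-1][1]

p, b, y = 0.002, 1.0, 1.0  # baseline risk, our budget, exogenous spending

def utilities(r):
    # Utilities of: give now, give alongside the exogenous spending at
    # year 100, or give in the distant future (year 1,000).
    grown_100 = b * (1 + r) ** 100
    grown_1000 = b * (1 + r) ** 1000
    now = eu([(100, p / (1 + b)), (0, p / (1 + b + y))])
    at_exogenous = eu([(100, p), (0, p / (1 + y + grown_100))])
    distant = eu([(100, p), (900, p / (1 + y)), (0, p / (1 + y + grown_1000))])
    return now, at_exogenous, distant
```

&lt;p&gt;For every interest rate tested, either giving now or giving in the distant future beats giving alongside the exogenous spending.&lt;/p&gt;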

&lt;p&gt;But if we invest for sufficiently long, we start to run into the limits of our model assumptions. According to this model, we can compound forever at some fixed rate \(r\). But if we do this, eventually we will come to control a significant fraction of the world’s wealth. Once we reach a certain level of wealth, it will no longer make sense to continue postponing our spending. We might not be able to earn enough of an investment return before this point for giving later to beat giving now.&lt;/p&gt;

&lt;h2 id=&quot;variation-including-movement-growth&quot;&gt;Variation: Including movement growth&lt;/h2&gt;

&lt;p&gt;The possibility of &lt;a href=&quot;https://forum.effectivealtruism.org/posts/MnQ3N7sDFHa56hj2f/movement-building-and-investing-to-give-later&quot;&gt;movement growth&lt;/a&gt; substantially affects considerations, so we should include it in our analysis. We might be able to spend money in ways that will encourage others to spend more on x-risk, and their spending might encourage still others, producing a compounding effect. How does this change things?&lt;/p&gt;

&lt;p&gt;Let’s return to our basic model with no exogenous spending. Suppose that any spending on x-risk also encourages others to spend more. For simplicity, suppose this compounding effect only lasts for some fixed amount of time, say 100 years. (This will limit the scope of our analysis.) Does this change how much we should value giving now vs. later?&lt;/p&gt;

&lt;p&gt;Perhaps surprisingly, it hardly changes anything. To see why, assume we live in the world with no movement growth and where, at the market interest rate, we are indifferent between giving now or later. That means we don’t care whether we spend one dollar now or \((1 + r)^{100}\) dollars in a century. Now suppose our donation causes someone else to spend a little money the following year, and that for every dollar we spend, they will spend \(\mu\) dollars. That means we cause either \(\mu\) dollars of spending next year or \(\mu (1 + r)^{100}\) dollars of spending in 101 years. The comparison between these two options looks nearly identical to the initial comparison between spending $1 now or \((1 + r)^{100}\) dollars later. The only difference in this new comparison is that we’ve already spent one dollar (if we choose to give now) or \((1 + r)^{100}\) dollars (if we give later).&lt;/p&gt;

&lt;p&gt;What if, instead of compounding for a century, movement growth continues compounding forever? If the movement growth rate exceeds the interest rate, then clearly we should give now. If not, a higher movement growth rate still pushes toward giving now. It’s difficult to say more than that because even calculating the utility of giving now or later at a particular (nonzero) movement growth rate requires computing an infinite sum, and I do not know of any way to evaluate the sum in finitely many steps.&lt;/p&gt;

&lt;p&gt;(I wrote &lt;a href=&quot;https://github.com/michaeldickens/public-scripts/blob/master/x-risk-now-later.py&quot;&gt;a Python script&lt;/a&gt; to numerically determine when to give. The program suggests that a movement growth rate greater than 1% usually favors giving now. But I do not entirely trust that the optimization algorithms I used can reliably produce correct answers, so treat this number as uncertain, even given the model assumptions.)&lt;/p&gt;

&lt;p&gt;This approach assumes that spending on x-risk reduction contributes to movement growth. Perhaps instead, some types of spending increase movement growth but don’t help reduce existential risk, and other types of spending do the opposite. That would add even more complexity to the decision (we now have three choices instead of two), so I will not address it other than to mention it as a possibility.&lt;/p&gt;

&lt;h2 id=&quot;variation-recurring-exogenous-spending&quot;&gt;Variation: Recurring exogenous spending&lt;/h2&gt;

&lt;p&gt;What if, instead of some exogenous spending occurring at a single point in time, society makes ongoing efforts to reduce x-risk? And it continues these efforts indefinitely, reducing risk closer and closer to zero?&lt;/p&gt;

&lt;p&gt;In this model, the long-term future has infinite expected utility.&lt;sup id=&quot;fnref:14&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:14&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;8&lt;/a&gt;&lt;/sup&gt; That means we can no longer compare different choices using ordinary methods, because we cannot say which of two infinities is larger. Unfortunately, with this model variation, we cannot say anything about whether to give now or later. That said, I’m optimistic that we could find a way to compare worlds with nonzero but asymptotically-diminishing discount rates.&lt;sup id=&quot;fnref:13&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:13&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;9&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;Alternatively, we could assume x-risk will decrease over time and approach some positive minimum rather than zero. In that case, the expected utility of the future would remain finite. It’s hard to characterize exactly how this model behaves, because it’s much more complicated than the model with point-in-time exogenous spending. Based on some &lt;a href=&quot;https://github.com/michaeldickens/public-scripts/blob/master/x-risk-now-later.py&quot;&gt;experiments with my Python script&lt;/a&gt; (under &lt;code&gt;permanent_reduction_binary_with_extra_exogenous_spending&lt;/code&gt;), it appears that this model favors giving now a little more strongly than the point-in-time exogenous spending model, but not overwhelmingly so.&lt;/p&gt;

&lt;p&gt;Due to its complexity, I will not address this variation in detail. Still, this seems like a fruitful subject for future research, because the real world will probably contain recurring exogenous spending.&lt;/p&gt;

&lt;h2 id=&quot;variation-temporary-benefits-only&quot;&gt;Variation: Temporary benefits only&lt;/h2&gt;

&lt;p&gt;What if spending on existential risk reduction does not permanently reduce x-risk? What if the benefits only last for, say, 100 years, and then x-risk reverts to its initial level?&lt;/p&gt;

&lt;p&gt;Let’s assume the following:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;We can either spend our budget now or in 100 years.&lt;/li&gt;
  &lt;li&gt;Our spending only reduces x-risk for 100 years, after which the benefit disappears.&lt;/li&gt;
  &lt;li&gt;In 100 years’ time, some exogenous spending will occur, and this spending will re-occur every 100 years so that it has a persistent effect.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Under this altered model, efforts to reduce x-risk look far less valuable in general. With the permanent-benefits assumption, the value of the future can grow arbitrarily large as the probability of extinction approaches zero. But when we can only reduce x-risk temporarily, we can at best reduce the risk to (near) zero during a single century, after which it will increase again.&lt;/p&gt;

&lt;p&gt;In general, giving now beats giving later more often under the temporary-benefits model than with permanent benefits. In either case, by giving later, you can reduce x-risk by a larger amount because you have more money. But in the permanent-benefits model, this larger x-risk reduction stays in place forever, whereas in the temporary-benefits case, it only lasts for a century.&lt;/p&gt;

&lt;p&gt;As described in the &lt;a href=&quot;#variation-exogenous-reduction-in-x-risk&quot;&gt;variation with exogenous spending&lt;/a&gt;, we should consider the scale of our budget relative to the cost to halve x-risk. With temporary benefits to spending, the break-even interest rate depends on the relative size of our budget. Increasing the budget pushes in favor of giving now. For example:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;When our budget equals the halving cost, the break-even interest rate is 0.4%.&lt;/li&gt;
  &lt;li&gt;When our budget is 2.6x the halving cost, the break-even interest rate is 0.9%.&lt;/li&gt;
  &lt;li&gt;When our budget exceeds 2.7x the cost to halve x-risk, giving now beats giving later for &lt;em&gt;any&lt;/em&gt; interest rate.&lt;/li&gt;
  &lt;li&gt;Similarly, with a small budget but sufficiently large exogenous spending, giving now looks preferable to giving later for any interest rate.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Let’s look at why giving now always beats giving later when the budget is greater than 2.7x the halving cost. Recall that this model requires us to spend all our money either now or next century. (The requirement to spend all our money at once doesn’t make much sense for this model variation, but let’s accept it for now.) So no matter how much interest we earn, the best we can do is reduce the probability of extinction to (arbitrarily close to) zero during the next century; we can’t do anything to reduce x-risk two, three, four or more centuries from now. This puts an upper limit on how much good we can do by giving later, even with infinite spending. At a certain point, higher interest rates don’t produce meaningful marginal benefits.&lt;/p&gt;
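
&lt;p&gt;A numerical sketch of this ceiling, under assumptions I chose for illustration (\(p = 0.002\), a budget of one halving-cost unit, and recurring exogenous spending of 100 halving-cost units that holds risk down from year 100 onward):&lt;/p&gt;

```python
from math import exp, expm1, log1p

def eu(segments):
    # `segments` is a list of (years, annual_risk) pairs; the last
    # segment's risk applies forever. Constant well-being of 1 per year.
    # expm1/log1p keep the geometric sums accurate for tiny risks.
    total, survival = 0.0, 1.0
    for years, q in segments[:-1]:
        log_survive = years * log1p(-q)
        total += survival * -expm1(log_survive) / q
        survival *= exp(log_survive)
    return total + survival / segments[-1][1]

p, b, e = 0.002, 1.0, 100.0  # annual risk, our budget, recurring exogenous spending

def give_now():
    # Our reduction lasts a century; afterward exogenous spending sets the level.
    return eu([(100, p / (1 + b)), (0, p / (1 + e))])

def give_later(r):
    # Risk stays at p for a century, drops for one century, then reverts
    # to the exogenously-reduced level.
    grown = b * (1 + r) ** 100
    return eu([(100, p), (100, p / (1 + e + grown)), (0, p / (1 + e))])
```

&lt;p&gt;Giving later improves with the interest rate but plateaus: even at a 100% annual return it stays below giving now, because nothing spent at year 100 touches x-risk beyond year 200.&lt;/p&gt;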

&lt;p&gt;It seems likely that the “effective altruist budget” is insufficient to halve existential risk. Therefore, if we assume no exogenous spending, we should prefer giving later to giving now. But it also seems likely that x-risk spending will increase in the future. Thus, if we adopt the temporary-benefits model, our belief about whether to give now or later depends almost entirely on how much exogenous spending we expect.&lt;/p&gt;

&lt;h3 id=&quot;should-we-spend-our-whole-budget-at-once-1&quot;&gt;Should we spend our whole budget at once?&lt;/h3&gt;

&lt;p&gt;In the previous section, I assumed that we had to spend all our money either now or in a century. But this assumption doesn’t make much sense.&lt;/p&gt;

&lt;p&gt;When we have the ability to permanently reduce x-risk, we usually want to &lt;a href=&quot;#should-we-spend-our-whole-budget-at-once&quot;&gt;use our entire budget in a single lump sum&lt;/a&gt;, either now or at some indefinite point in the future (at least according to the given model assumptions). But recall that this result depends on a specific condition: the last dollar spent in one period must do more good than the first dollar spent in any other period. This can happen in one of two ways:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Spending has linear or increasing marginal utility.&lt;/li&gt;
  &lt;li&gt;Spending has diminishing marginal utility, but it diminishes slowly enough that we still prefer to spend our entire budget in one period.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In the temporary-benefits model, the utility of spending diminishes much more rapidly than in the permanent-benefits model, because reducing x-risk only linearly increases the expected length of the future rather than exponentially increasing it. And reducing x-risk becomes more costly over time. Therefore, each marginal dollar does much less than the last to increase the expected length of the future.&lt;/p&gt;

&lt;p&gt;So we prefer to spread our spending over time (see &lt;a href=&quot;#optimality-of-spreading-spending-over-time&quot;&gt;Appendix D&lt;/a&gt; for proof). In this case, our simple binary choice of give now vs. give a century from now doesn’t reflect how we should actually behave. Instead, at each point in time, we should determine the optimal proportion of our budget to use, and then invest the rest for later.&lt;/p&gt;

&lt;p&gt;Determining exactly how much to spend in each century is much harder, so I will not attempt it in this essay. It would likely make a fruitful subject for future discussion, particularly because this model is perhaps more realistic than any of the others in this essay.&lt;/p&gt;

&lt;h1 id=&quot;assumptions&quot;&gt;Assumptions&lt;/h1&gt;

&lt;p&gt;The models discussed in this essay make a lot of assumptions. I already laid out some of these explicitly when first describing my &lt;a href=&quot;#a-simple-approach-to-giving-now-vs-later&quot;&gt;basic approach&lt;/a&gt;. In this section, I will address some implicit premises. I cannot comprehensively cover every assumption made, but we should review a few of the most important ones and discuss their implications.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;http://piketty.pse.ens.fr/files/Ramsey1928.pdf&quot;&gt;Ramsey (1928)&lt;/a&gt;&lt;sup id=&quot;fnref:9:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:9&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt; gives a longer list of assumptions made by models similar to the ones in this essay.&lt;/p&gt;

&lt;h2 id=&quot;future-happiness-outweighs-suffering&quot;&gt;Future happiness outweighs suffering&lt;/h2&gt;

&lt;p&gt;This model assumes that, conditional on non-extinction, the world will experience positive well-being in perpetuity. But &lt;a href=&quot;https://mdickens.me/2015/08/15/is_preventing_human_extinction_good/&quot;&gt;the future might not consist of positive experiences on balance&lt;/a&gt;. It looks more likely than not that the world currently experiences net negative utility thanks to &lt;a href=&quot;https://longtermrisk.org/the-importance-of-wild-animal-suffering/#Summary_of_why_suffering_probably_dominates_happiness&quot;&gt;wild animal suffering&lt;/a&gt;, and presumably has done so for as long as sentient life has existed. So the assumption of positive well-being requires either that the world is in fact net positive currently, or that we will improve things in the future such that it becomes net positive.&lt;/p&gt;

&lt;p&gt;Alternatively, even if we expect future happiness to outweigh suffering, we could claim that &lt;a href=&quot;http://centerforreducingsuffering.org/intro/&quot;&gt;suffering matters more&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;How exactly an extinction event affects the well-being of sentient life depends on the nature of the event. If it wipes out humanity while leaving most animals alive, that would end human-caused animal suffering, but allow wild animal suffering to persist. A catastrophe that wipes out all life (such as a massive asteroid impact, or perhaps a misaligned AI) would reduce utility to zero.&lt;/p&gt;

&lt;h2 id=&quot;we-cannot-give-to-any-other-cause&quot;&gt;We cannot give to any other cause&lt;/h2&gt;

&lt;p&gt;As discussed &lt;a href=&quot;#relationship-to-the-ramsey-model&quot;&gt;previously&lt;/a&gt;, the models outlined in this essay are based on the Ramsey model, but have some key differences. Most significantly, the traditional Ramsey model treats the discount rate as fixed, and the decision-maker must decide how much to spend on directly improving well-being at each point in time. The model in this essay treats well-being as fixed, but allows the discount rate (that is, the probability of extinction) to vary.&lt;/p&gt;

&lt;p&gt;In real life, actors can spend money either on increasing well-being or on reducing x-risk (or on other things). A more sophisticated model would allow for a three-pronged choice: (1) invest, (2) spend on increasing well-being, or (3) spend on reducing x-risk. (Even more complex models could introduce additional choices, although at a certain point, additional complexity becomes more burdensome than enlightening.) This three-pronged choice approach might tell us something about how spending over time should look across cause areas.&lt;/p&gt;

&lt;h2 id=&quot;the-cost-of-reducing-x-risk-grows-with-inflation&quot;&gt;The cost of reducing x-risk grows with inflation&lt;/h2&gt;

&lt;p&gt;The models we used have a built-in assumption that the cost of reducing x-risk grows over time at the rate of inflation—that’s why we calculate investment return using the real interest rate rather than the nominal rate.&lt;/p&gt;

&lt;p&gt;The cost of goods grows at the rate of inflation, and x-risk reduction is sort of like a good. But the cost of labor grows with GDP. If x-risk reduction requires employing people, and wages grow approximately with GDP, then we should use the GDP-adjusted interest rate instead of the real rate. In that case, we end up with a lower interest rate, and thus giving now looks relatively more appealing.&lt;/p&gt;

&lt;p&gt;We do already assume that money spent on reducing x-risk has logarithmic return, and GDP growth could partially drive this effect. Additionally, costs are driven by supply and demand, so if we don’t spend anything on x-risk (and no exogenous spending occurs), demand stays fixed, and the cost does not increase. But if, say, doing x-risk research requires hiring people away from other fields, then our costs will increase over time with wages even if we don’t spend any money.&lt;/p&gt;

&lt;h2 id=&quot;zero-discount-rate-other-than-x-risk&quot;&gt;Zero discount rate (other than x-risk)&lt;/h2&gt;

&lt;p&gt;This model assumes we only discount future well-being in proportion to the probability of extinction. Some altruists &lt;a href=&quot;https://mdickens.me/2020/07/03/estimating_discount_rate/&quot;&gt;might discount the future for other reasons&lt;/a&gt;, although these mostly apply to individuals or organizations, not to society as a whole. We might adopt a pure time preference, where we simply consider the future to be less valuable than the present, although most philosophers believe that a pure time preference has no reasonable justification.&lt;/p&gt;

&lt;p&gt;One source of discounting that does apply at the societal level is the possibility of transformative AI. If a sufficiently advanced artificial intelligence would render our efforts useless, then we should discount future well-being by the probability that a transformative AI emerges. Adding this additional discount makes giving now look relatively more appealing.&lt;/p&gt;

&lt;h2 id=&quot;fixed-interest-rate&quot;&gt;Fixed interest rate&lt;/h2&gt;

&lt;p&gt;So far, we have assumed that investors can earn interest at some risk-free rate. In practice, investors almost always prefer to take on at least some degree of risk in order to increase their expected return. How does this apply to existential risk donors?&lt;/p&gt;

&lt;p&gt;The optimal investment strategy depends on whether x-risk reductions are permanent or temporary. Under the permanent-benefits model, spending has linear utility. That means investors want to maximize expected return without regard for risk. (As mentioned previously, this seems implausible.)&lt;/p&gt;

&lt;p&gt;Under the temporary-benefits variation, spending has decreasing marginal utility (because, unlike in the permanent-benefits model, reducing x-risk does not exponentially increase the expected length of the future). When we have decreasing marginal utility, we prefer less risky investments (all else equal). In addition:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;If we hold more than enough money to halve x-risk, then we should behave similarly to most investors with regard to our risk appetite.&lt;/li&gt;
  &lt;li&gt;If we have less money than the halving cost, then we prefer to take on more risk.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The details of optimal investment strategy are given in &lt;a href=&quot;#appendix-e-optimal-investment-where-spending-temporarily-reduces-x-risk&quot;&gt;Appendix E&lt;/a&gt;.&lt;/p&gt;

&lt;h1 id=&quot;on-patience&quot;&gt;On patience&lt;/h1&gt;

&lt;p&gt;&lt;a href=&quot;https://philiptrammell.com/static/discounting_for_patient_philanthropists.pdf&quot;&gt;Philip Trammell has written&lt;/a&gt; and &lt;a href=&quot;https://80000hours.org/podcast/episodes/phil-trammell-patient-philanthropy/&quot;&gt;spoken&lt;/a&gt; on how philanthropists should think about giving now vs. later. His analysis essentially supersedes all previous work on the question among effective altruists. He offers a compelling argument that EAs should give later. I cannot do full justice to the argument, but at a high level, it goes like this (as I interpret it):&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;For society as a whole, there exists some optimal balance of consumption and investment.&lt;/li&gt;
  &lt;li&gt;The optimal rate of consumption depends on how much one discounts future welfare—a higher discount means one should consume more now, and vice versa.&lt;/li&gt;
  &lt;li&gt;Most people are “impatient”: they don’t consider future welfare as important as welfare today. (They have a pure time preference.)&lt;/li&gt;
  &lt;li&gt;Philanthropists should be “patient”. (They should not have a pure time preference.)&lt;/li&gt;
  &lt;li&gt;Therefore, from a philanthropic perspective, society will consume too much and invest too little.&lt;/li&gt;
  &lt;li&gt;Therefore, at the margin, philanthropists maximize long-term welfare by investing all their resources rather than donating now.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To put it slightly differently, most people under-value the future, so patient philanthropists can buy future welfare cheaply (so to speak).&lt;/p&gt;

&lt;p&gt;This argument is based on the &lt;a href=&quot;#relationship-to-the-ramsey-model&quot;&gt;Ramsey model&lt;/a&gt;. It assumes we do good by increasing consumption. This most obviously applies for causes like cash transfers, where altruists give money directly to poor people, who can then spend that money to improve their circumstances. It might more loosely apply to other causes such as factory farming, where charitable giving does not do good by increasing consumption per se, but it does create a direct improvement in someone’s life circumstances&lt;sup id=&quot;fnref:11&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:11&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;10&lt;/a&gt;&lt;/sup&gt; (where “someone” could be a chicken or a fish, not just a human). As Trammell’s argument goes, we should expect to do more good by increasing consumption in the future rather than in the present.&lt;/p&gt;

&lt;p&gt;Altruistic efforts to reduce existential risk do not necessarily work the same way. Reducing x-risk does not directly improve people’s well-being; instead, it increases the chance that future people will exist at all. When thinking about giving now vs. later, we cannot use exactly the same approach.&lt;sup id=&quot;fnref:15&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:15&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;11&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;How does Trammell’s argument apply to the models developed in this essay?&lt;/p&gt;

&lt;p&gt;If our actions can permanently reduce x-risk, then society should collectively spend all of its money either now or later (under these models). The fact that other actors behave impatiently does not change how philanthropists should use our resources on the margin. (In real life, we can’t spend all our money instantly. But there’s still a meaningful difference between “you should spend 5% of your altruistic resources per year” and “you should spend as quickly as possible without compromising effectiveness.”)&lt;/p&gt;

&lt;p&gt;If x-risk reductions only work temporarily, that means we should disperse our resources at some particular rate. And if most actors are impatient, they probably spend too quickly, so patient philanthropists should balance this out by investing all their money to give later.&lt;/p&gt;

&lt;p&gt;The field of existential risk might not contain many impatient actors. We do see substantial spending from society at large on some potential existential risks, such as climate change and biosecurity, but most of this funding does not specifically focus on minimizing the chance of extinction. Perhaps patient actors account for most or nearly all of the “pure” x-risk funding, in which case they should give at the optimal rate rather than giving (exclusively) later.&lt;/p&gt;

&lt;h1 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h1&gt;

&lt;p&gt;For x-risk-focused philanthropists, how to use altruistic resources over time substantially depends on how spending affects x-risk and movement growth. This essay showed what results we get with various assumptions. To summarize:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;If we can permanently reduce x-risk and we expect only modest exogenous spending or movement growth, then we should give later.&lt;/li&gt;
  &lt;li&gt;If instead we expect substantial exogenous spending or movement growth, then we should give now.&lt;/li&gt;
  &lt;li&gt;If efforts can only temporarily reduce x-risk, then we should give some proportion of our resources each year. We should determine our spending rate based on the interest rate, exogenous spending, and movement growth. But if most actors in the space are impatient, then we should save all our money to give later.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So what conclusion we draw depends on which model assumptions we find most plausible. I do not have a particularly well-informed opinion on the matter, but I’m inclined to believe that (a) some actions can have long-lasting (if not quite permanent) impacts on x-risk, (b) spending on x-risk substantially contributes to movement growth, and (c) most x-risk donors behave impatiently, not because of a pure time preference but simply because they haven’t put much thought into the optimal spending rate. I would also guess that the effect of (b) is larger than that of (c). Based on these assumptions, x-risk donors should spread their spending over time, and probably spend at a relatively high rate. But I only have weak confidence in this position.&lt;/p&gt;

&lt;p&gt;As a final observation, notice that in the permanent-benefits model, the probability of extinction appears hardly to matter at all. With no exogenous spending, giving now only beats giving later when x-risk is implausibly high—it must exceed the interest rate, which would require an annual x-risk level of 2% to 5% or so (depending on what you believe the interest rate equals). With high exogenous spending, giving now looks better even when extinction is extremely unlikely. The probability of extinction only matters when the amount of exogenous spending falls in a fairly narrow middle range.&lt;/p&gt;

&lt;p&gt;The models in this essay are not intended to accurately reflect reality. I believe better models could be developed. This essay simply provides some initial approaches to the problem of how x-risk donors should allocate their resources across time. Future work could create new and better models.&lt;/p&gt;

&lt;p&gt;Source code for all calculations done in this essay is available &lt;a href=&quot;https://github.com/michaeldickens/public-scripts/blob/master/x-risk-now-later.py&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h1 id=&quot;appendix&quot;&gt;Appendix&lt;/h1&gt;
&lt;h2 id=&quot;appendix-a-simple-approach-to-giving-now-vs-later&quot;&gt;Appendix A: Simple approach to giving now vs. later&lt;/h2&gt;

&lt;h3 id=&quot;derivation-of-the-probability-of-extinction-fracp1--x&quot;&gt;Derivation of the probability of extinction \(\frac{p}{1 + x}\)&lt;/h3&gt;

&lt;p&gt;Let \(\delta\) be the probability of extinction. First, the rate of x-risk reduction is proportional to the current level of x-risk. We can represent this mathematically as \(\frac{d \delta}{d y} = -\alpha \delta\) for some constant \(\alpha\), where \(y\) is work output aimed at reducing x-risk. For simplicity, let’s assume \(\alpha = 1\). The solution to this equation is \(\delta = p e^{-y}\) for any constant \(p\), where \(p\) gives the probability of extinction before any work is done.&lt;/p&gt;

&lt;p&gt;Second, work output is asymptotically logarithmic with spending: \(y = \log(1 + x)\). (We add 1 so that zero spending corresponds to zero work output instead of \(-\infty\).) Plugging this into the previous equation gives \(\delta = p e^{-\log(1 + x)}\), which we can rewrite as \(\delta = \frac{p}{1 + x}\). One unit of spending represents how much it costs to halve x-risk from its initial level, because \(x = 1\) will decrease x-risk from \(p\) to \(\frac{p}{2}\).&lt;/p&gt;
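&lt;p&gt;A quick numeric check of this derivation (the value of \(p\) is an arbitrary illustration):&lt;/p&gt;

```python
import math

p = 0.01  # arbitrary illustrative initial probability of extinction

def risk_after_spending(x):
    y = math.log(1 + x)       # work output purchased by spending x
    return p * math.exp(-y)   # delta = p * e^(-y) = p / (1 + x)

assert abs(risk_after_spending(1) - p / 2) < 1e-15   # x = 1 halves x-risk
assert abs(risk_after_spending(3) - p / 4) < 1e-15   # doubling (1 + x) halves it again
```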

&lt;h3 id=&quot;expected-utility-of-giving-now-or-later&quot;&gt;Expected utility of giving now or later&lt;/h3&gt;

&lt;p&gt;If we do not give any money toward reducing x-risk, the expected utility of the future equals&lt;/p&gt;

&lt;div class=&quot;mathbox&quot;&gt;
\begin{align}
\sum\limits_{t=0}^\infty \left(1 - p \right)^t w = \frac{w}{p}
\end{align}
&lt;/div&gt;

&lt;p&gt;where \(w\) is the constant level of well-being experienced by sentient beings during each year.&lt;/p&gt;

&lt;p&gt;For simplicity, let \(w = 1\).&lt;/p&gt;

&lt;p&gt;The formulas for expected utility of giving now or later follow easily from the definitions given. These formulas are:&lt;/p&gt;

&lt;div class=&quot;mathbox&quot;&gt;
\begin{align}
\textrm{Utility Now} = \sum\limits_{t=0}^\infty \left(1 - \frac{p }{1 + x} \right)^t = \frac{1 + x}{p}
\end{align}
&lt;/div&gt;

&lt;div class=&quot;mathbox&quot;&gt;
\begin{align}
\textrm{Utility Later} = \left[ \sum\limits_{t=0}^{99} (1 - p)^t \right] + (1 - p)^{100} \left( \frac{1 + x (1 + r)^{100}}{p} \right)
\end{align}
&lt;/div&gt;

&lt;p&gt;These formulas make use of the identity \(\sum\limits_{t=0}^\infty (1 - a)^t = \frac{1}{a}\).&lt;/p&gt;

&lt;h3 id=&quot;derivation-of-the-break-even-point-r--p&quot;&gt;Derivation of the break-even point \(r = p\)&lt;/h3&gt;

&lt;p&gt;In the main article, I claimed that the break-even interest rate occurs approximately at \(r = p\).&lt;/p&gt;

&lt;p&gt;\(r = \frac{p}{1 - p}\) is the exact break-even interest rate. When \(p\) is small, \(p \approx \frac{p}{1 - p}\). This works even for unreasonably large values of \(p\): For example, when \(p = 1/10\), \(\frac{p}{1 - p} = 1/9\), and surely the annual probability of extinction is less than 10%. More generally, \(\lvert p - \frac{p}{1 - p} \rvert = \frac{p^2}{1 - p}\), which shrinks slightly faster than quadratically as \(p\) approaches zero.&lt;/p&gt;

&lt;p&gt;Now let’s prove that \(r = \frac{p}{1 - p}\) gives the exact break-even interest rate. To do this, we plug this value of \(r\) into the formula for the utility of giving later and show that it equals the utility of giving now.&lt;/p&gt;

&lt;div class=&quot;mathbox&quot;&gt;
\begin{align}
\textrm{Utility Later} = \left[ \sum\limits_{t=0}^{99} (1 - p)^t \right] + (1 - p)^{100} \left( \frac{1 + x (\frac{1}{1 - p})^{100}}{p} \right)
\end{align}
&lt;/div&gt;

&lt;p&gt;We can rearrange this to&lt;/p&gt;

&lt;div class=&quot;mathbox&quot;&gt;
\begin{align}
\left[ \sum\limits_{t=0}^{99} (1 - p)^t \right] + \frac{1}{p} (1 - p)^{100} + \frac{x}{p}
\end{align}
&lt;/div&gt;

&lt;p&gt;Observe that \(\frac{1}{p} = \sum\limits_{t=0}^\infty (1 - p)^t\), and therefore \(\frac{1}{p} (1 - p)^{100} = \sum\limits_{t=100}^\infty (1 - p)^t\). Furthermore, clearly \(\sum\limits_{t=0}^{99} (1 - p)^t + \sum\limits_{t=100}^\infty (1 - p)^t = \sum\limits_{t=0}^\infty (1 - p)^t\). Thus we can rewrite the Utility Later formula as&lt;/p&gt;

&lt;div class=&quot;mathbox&quot;&gt;
\begin{align}
\frac{1}{p} + \frac{x}{p} = \frac{1 + x}{p}
\end{align}
&lt;/div&gt;

&lt;p&gt;which exactly equals Utility Now. \(\blacksquare\)&lt;/p&gt;

&lt;p&gt;Observations:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;This proof works no matter when the “give later” time occurs, whether it’s a century from now, a millennium, a year, etc.&lt;/li&gt;
  &lt;li&gt;When \(r &amp;gt; \frac{p}{1 - p}\), you can achieve arbitrarily high expected utility by waiting sufficiently long before spending.&lt;/li&gt;
&lt;/ol&gt;
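&lt;p&gt;The proof above is also easy to confirm numerically. A sketch, with arbitrary illustrative values for \(p\) and \(x\):&lt;/p&gt;

```python
p, x = 0.002, 1.5   # arbitrary annual extinction probability and budget

def utility_now():
    return (1 + x) / p

def utility_later(r, T=100):
    grown = x * (1 + r) ** T   # budget after T years of compounding
    return sum((1 - p) ** t for t in range(T)) + (1 - p) ** T * (1 + grown) / p

r_star = p / (1 - p)   # the exact break-even interest rate
assert abs(utility_later(r_star) - utility_now()) < 1e-6
assert utility_later(r_star + 0.001) > utility_now()   # above break-even, later wins
assert utility_later(r_star - 0.001) < utility_now()   # below it, now wins
```

&lt;p&gt;Changing &lt;code&gt;T&lt;/code&gt; leaves the break-even rate unchanged, matching observation 1.&lt;/p&gt;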

&lt;h3 id=&quot;when-to-spend-all-money-at-once&quot;&gt;When to spend all money at once&lt;/h3&gt;

&lt;p&gt;The model assumes we must spend either now or 100 years from now. Let’s generalize this to say we must spend either now or \(T\) years from now, for some number \(T\). We have a budget \(B\) that we can spend. Let \(x_1\) be the amount we choose to spend now, and \(x_2\) be the amount we spend at time \(T\). We can discount \(x_2\) at the investment rate of return to get that \(x_1 + (1 + r)^{-T} x_2 = B\). What are the optimal values of \(x_1\) and \(x_2\)?&lt;/p&gt;

&lt;p&gt;Total utility is given by&lt;/p&gt;

&lt;div class=&quot;mathbox&quot;&gt;
\begin{align}
U = \sum\limits_{t=0}^{T-1} (1 - \frac{p}{1 + x_1})^t + (1 - \frac{p}{1 + x_1})^T \sum\limits_{t=0}^\infty (1 - \frac{p}{1 + x_1 + x_2})^t
\end{align}
&lt;/div&gt;

&lt;p&gt;The formula for the optimal values of \(x_i\) is complicated, but we can confirm numerically that for most reasonable parameter values, at \(T=100\), we prefer to give all now or all later. (See &lt;a href=&quot;https://github.com/michaeldickens/public-scripts/blob/master/x-risk-now-later.py&quot;&gt;x-risk-now-later.py&lt;/a&gt;, using the function &lt;code&gt;permanent_reduction_continuous&lt;/code&gt;.)&lt;/p&gt;

&lt;p&gt;We can specify the precise formula when \(T = 1\). In this case, utility is maximized by \(x_1 = \sqrt{\displaystyle\frac{p(1 + r)(1 + B)}{r}} - 1\). If \(x_1 \le 0\), that means we should spend all of our money later and none now, and vice versa if \(x_1 \ge B\). If \(0 &amp;lt; x_1 &amp;lt; B\), then we should spend some money now and some later.&lt;/p&gt;
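&lt;p&gt;The closed-form optimum for \(T = 1\) can be sanity-checked against a brute-force search (parameter values chosen arbitrarily so that the optimum is interior):&lt;/p&gt;

```python
import math

p, r, B = 0.01, 0.005, 10.0   # arbitrary illustrative parameters

def total_utility(x1):
    x2 = (B - x1) * (1 + r)   # invest the remainder for one year
    return 1 + (1 - p / (1 + x1)) * (1 + x1 + x2) / p

x1_star = math.sqrt(p * (1 + r) * (1 + B) / r) - 1   # closed form from the text
assert 0 < x1_star < B                               # interior optimum: split the budget

grid = [i * B / 10000 for i in range(10001)]
best_x1 = max(grid, key=total_utility)
assert abs(best_x1 - x1_star) < 0.01                 # grid search lands on the same optimum
assert total_utility(best_x1) <= total_utility(x1_star) + 1e-9
```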

&lt;h2 id=&quot;appendix-b-model-variation-where-well-being-increases-over-time&quot;&gt;Appendix B: Model variation where well-being increases over time&lt;/h2&gt;
&lt;h3 id=&quot;expected-utility-of-giving-now-or-later-1&quot;&gt;Expected utility of giving now or later&lt;/h3&gt;

&lt;p&gt;For any \(a\) with \(0 &amp;lt; a \le 1\), \(\sum\limits_{t=0}^\infty (1 - a)^t \cdot t = \frac{1 - a}{a^2}\). We can use this fact to update the &lt;a href=&quot;#expected-utility-of-giving-now-or-later&quot;&gt;formulas&lt;/a&gt; for the expected utility of giving now or later.&lt;/p&gt;

&lt;div class=&quot;mathbox&quot;&gt;
\begin{align}
\textrm{Utility Now} = \frac{1 - \frac{p}{1 + x}}{(\frac{p}{1 + x})^2} = \frac{1}{p^2}[x^2 + (2 - p) x + (1 - p)]
\end{align}
&lt;/div&gt;

&lt;div class=&quot;mathbox&quot;&gt;
\begin{align}
\textrm{Utility Later} = \sum\limits_{t=0}^{99} (1 - p)^t + (1 - p)^{100} \frac{1}{p^2} [(x (1 + r)^{100})^2 + (2 - p) x (1 + r)^{100} + (1 - p)]
\end{align}
&lt;/div&gt;
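&lt;p&gt;A numeric check of the summation identity and of the algebraic simplification in the Utility Now formula (truncating the infinite sum; parameter values are arbitrary):&lt;/p&gt;

```python
a = 0.05
truncated_sum = sum((1 - a) ** t * t for t in range(20000))
assert abs(truncated_sum - (1 - a) / a ** 2) < 1e-6    # the identity used above

p, x = 0.01, 2.0
q = p / (1 + x)
lhs = (1 - q) / q ** 2                                 # Utility Now, unsimplified
rhs = (x ** 2 + (2 - p) * x + (1 - p)) / p ** 2        # simplified form
assert abs(lhs - rhs) < 1e-6
```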

&lt;h3 id=&quot;derivation-of-the-break-even-point&quot;&gt;Derivation of the break-even point&lt;/h3&gt;

&lt;p&gt;I will not precisely derive the break-even interest rate in this case. Instead, I will simply show that the break-even point occurs when \(r &amp;lt; \frac{p}{1 - p}\). From this, it follows that the break-even point is lower in the increasing-wellbeing model than in the constant-wellbeing model.&lt;/p&gt;

&lt;p&gt;Plugging in \(r = \frac{p}{1 - p}\) to the formula for Utility Later gives&lt;/p&gt;

&lt;div class=&quot;mathbox&quot;&gt;
\begin{align}
\sum\limits_{t=0}^{99} (1 - p)^t + (1 - p)^{100} \frac{1}{p^2} [(x (1 - p)^{-100})^2 + (2 - p) x (1 - p)^{-100} + (1 - p)]
\end{align}
&lt;/div&gt;

&lt;p&gt;Rearranging these terms, and using the same trick we used in &lt;a href=&quot;#derivation-of-the-break-even-point-r--p&quot;&gt;the previous proof&lt;/a&gt; to get rid of the summation term, we get&lt;/p&gt;

&lt;div class=&quot;mathbox&quot;&gt;
\begin{align}
\textrm{Utility Later} = \frac{1}{p^2} \left[ \frac{x^2}{(1 - p)^{100}} + (2 - p) x + (1 - p) \right]
\end{align}
&lt;/div&gt;

&lt;p&gt;Comparing this to the Utility Now formula, we see that they look nearly the same, except that Utility Now has an \(x^2\) term, and in its place, Utility Later has \(\frac{x^2}{(1 - p)^{100}}\). The latter term is bigger, so giving later provides greater utility than giving now. \(\blacksquare\)&lt;/p&gt;

&lt;h2 id=&quot;appendix-c-beyond-a-century&quot;&gt;Appendix C: Beyond a century&lt;/h2&gt;

&lt;h3 id=&quot;option-2-is-never-best&quot;&gt;Option 2 is never best&lt;/h3&gt;

&lt;p&gt;Let \(\hat{p} = \frac{p}{1 + y}\). As discussed in the main text, option 3 provides greater utility than the other options when \(r &amp;gt; \hat{p}\). However, as discussed &lt;a href=&quot;#derivation-of-the-break-even-point-r--p&quot;&gt;previously in this appendix&lt;/a&gt;, this is an approximation. The true break-even point is \(\frac{\hat{p}}{1 - \hat{p}} = \frac{p}{1 + y - p}\). Let \(r_2 = \frac{p}{1 + y - p}\). If we show that the break-even interest rate \(r_1\) between options 1 and 2 is greater than \(r_2\), that means there are two cases:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;The market interest rate is less than \(r_1\), so option 1 wins.&lt;/li&gt;
  &lt;li&gt;The market interest rate is greater than \(r_2\), so option 3 wins.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Let \(U_n\) be the total utility of option \(n\). Let \(r\) be the market interest rate. Without loss of generality, suppose that rather than spending either now or in a century, we spend either now or next year (this makes the math simpler).&lt;/p&gt;

&lt;p&gt;Then:&lt;/p&gt;

&lt;div class=&quot;mathbox&quot;&gt;
\begin{align}
U_1 &amp;amp;= 1 + \left( 1 - \frac{p}{1 + x} \right) \left( \frac{1 + y + x}{p} \right) \\ U_2 &amp;amp;= 1 + (1 - p) \left( \frac{1 + y + x(1 + r)}{p} \right)
\end{align}
&lt;/div&gt;

&lt;p&gt;The break-even interest rate \(r_1\) occurs when \(U_1 = U_2\). Solving for \(r_1\) gives&lt;/p&gt;

&lt;div class=&quot;mathbox&quot;&gt;
\begin{align}
r_1 = \frac{p ( 1 + y + x)}{(1 - p) (1 + x)}
\end{align}
&lt;/div&gt;

&lt;p&gt;We want to find the conditions under which \(r_1 \ge r_2\):&lt;/p&gt;

&lt;div class=&quot;mathbox&quot;&gt;
\begin{align}
\frac{p ( 1 + y + x)}{(1 - p) (1 + x)} \ge \frac{p}{1 + y - p}
\end{align}
&lt;/div&gt;

&lt;p&gt;Simplifying this inequality gives&lt;/p&gt;

&lt;div class=&quot;mathbox&quot;&gt;
\begin{align}
(2 + x + y)y \ge p y
\end{align}
&lt;/div&gt;

&lt;p&gt;When \(y = 0\), this inequality holds because both sides equal \(0\). When \(y &amp;gt; 0\), we can divide through by \(y\) to get \(2 + x + y \ge p\). Since \(p\) is a probability, it cannot exceed 1, so this inequality always holds. The inequality therefore holds in both cases, and thus \(r_1 \ge r_2\).&lt;/p&gt;

&lt;p&gt;Option 2 can only be optimal when \(r &amp;gt; r_1\) and \(r &amp;lt; r_2\), but this can never happen. Therefore, we should always prefer option 1 or option 3. \(\blacksquare\)&lt;/p&gt;
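&lt;p&gt;The inequality \(r_1 \ge r_2\) can also be verified by brute force over a parameter grid (the grid values are arbitrary but span the plausible ranges):&lt;/p&gt;

```python
def r1(p, x, y):
    return p * (1 + y + x) / ((1 - p) * (1 + x))

def r2(p, y):
    return p / (1 + y - p)

for p in (0.0001, 0.001, 0.01, 0.1, 0.5):
    for x in (0.0, 0.1, 1.0, 10.0):
        for y in (0.0, 0.1, 1.0, 10.0):
            assert r1(p, x, y) >= r2(p, y) - 1e-12   # equality only at y = 0
```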

&lt;h2 id=&quot;appendix-d-model-variation-where-spending-only-temporarily-reduces-x-risk&quot;&gt;Appendix D: Model variation where spending only temporarily reduces x-risk&lt;/h2&gt;
&lt;h3 id=&quot;optimality-of-spreading-spending-over-time&quot;&gt;Optimality of spreading spending over time&lt;/h3&gt;

&lt;p&gt;To show that we should spread out our spending across multiple centuries, it is sufficient to show that utility of spending is concave within each period, holding spending in all other periods fixed.&lt;/p&gt;

&lt;p&gt;To make this proof a little easier, let’s change the assumptions. Instead of x-risk reduction efforts lasting a century, let’s say they only last for a year.&lt;/p&gt;

&lt;p&gt;Suppose you spend your budget at year \(n + 1\). Then total utility equals (using the same variable terminology as in previous appendices):&lt;/p&gt;

&lt;div class=&quot;mathbox&quot;&gt;
\begin{align}
U &amp;amp; = \sum\limits_{t=0}^{n} (1 - p)^t + (1 - p)^{n} \left( 1 - \frac{p}{1 + x} \right) \left( \sum_{t=0}^\infty (1 - p)^t \right)
\end{align}
&lt;/div&gt;

&lt;p&gt;Then, taking the second derivative:&lt;/p&gt;

&lt;div class=&quot;mathbox&quot;&gt;
\begin{align}
\frac{\partial^2 U}{\partial x^2} = \frac{-2}{(1 + x)^3} (1 - p)^n
\end{align}
&lt;/div&gt;

&lt;p&gt;The second derivative is negative for any non-negative value of \(x\). Therefore, utility is concave with respect to spending. \(\blacksquare\)&lt;/p&gt;
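&lt;p&gt;A finite-difference check of this concavity result (the values of \(p\), \(n\), and the test points are arbitrary illustrative choices):&lt;/p&gt;

```python
p, n = 0.01, 50   # arbitrary annual extinction probability and spending year

def U(x):
    head = sum((1 - p) ** t for t in range(n + 1))
    return head + (1 - p) ** n * (1 - p / (1 + x)) / p

h = 0.01
for x in (0.01, 0.5, 1.0, 5.0, 20.0):
    second = (U(x + h) - 2 * U(x) + U(x - h)) / h ** 2
    assert second < 0                                  # utility is concave in spending
    exact = -2 * (1 - p) ** n / (1 + x) ** 3           # analytic second derivative
    assert abs(second - exact) < 1e-2 * abs(exact)
```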

&lt;h2 id=&quot;appendix-e-optimal-investment-where-spending-temporarily-reduces-x-risk&quot;&gt;Appendix E: Optimal investment where spending temporarily reduces x-risk&lt;/h2&gt;

&lt;p&gt;Suppose we have some risky investment and want to decide how much &lt;a href=&quot;https://mdickens.me/2020/01/06/how_much_leverage_should_altruists_use/&quot;&gt;leverage&lt;/a&gt; to use. We should pick the amount of leverage that maximizes expected utility.&lt;/p&gt;

&lt;p&gt;For simplicity, assume x-risk reductions only last for a single year rather than a century. (Or we could say that a single period equals a century rather than a year. This simplification changes the answer somewhat, but it’s necessary to make the problem tractable.) From the utility formula in Appendix D, we can see that utility is a linear function of \(\frac{-p}{1 + x}\). I do not believe the expected utility of this function has a precise closed-form solution, but it does closely resemble an &lt;a href=&quot;https://en.wikipedia.org/wiki/Isoelastic_utility&quot;&gt;isoelastic utility function&lt;/a&gt; with a relative risk aversion coefficient of 2.&lt;/p&gt;

&lt;p&gt;More precisely, we can calculate that the &lt;a href=&quot;https://en.wikipedia.org/wiki/Risk_aversion#Relative_risk_aversion&quot;&gt;relative risk aversion&lt;/a&gt; (RRA) for this utility function equals \(\frac{2 x}{1 + x}\). For small \(x\), RRA is close to 0; as \(x\) increases, it approaches a maximum of 2.&lt;/p&gt;
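&lt;p&gt;A finite-difference check of this RRA formula. RRA is invariant to positive affine transformations of utility, so the choice of \(p\) is arbitrary:&lt;/p&gt;

```python
p = 0.01   # arbitrary; p cancels out of the RRA calculation

def u(x):
    return -p / (1 + x)   # utility as a function of the budget x, up to affine terms

h = 1e-3
for x in (0.1, 1.0, 5.0, 20.0):
    u1 = (u(x + h) - u(x - h)) / (2 * h)             # numeric first derivative
    u2 = (u(x + h) - 2 * u(x) + u(x - h)) / h ** 2   # numeric second derivative
    rra = -x * u2 / u1
    assert abs(rra - 2 * x / (1 + x)) < 1e-4         # matches the formula in the text
```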

&lt;p&gt;For an isoelastic utility function with RRA = 2, optimal leverage is given by \(\frac{\mu}{2 \sigma^2}\), where \(\mu\) and \(\sigma\) are the mean and standard deviation of the logarithm of the interest rate \(r\) (assuming \(r\) follows a log-normal distribution). &lt;a href=&quot;http://lifecycleinvesting.net/Resources/merton%20lifetime%20portfolio%20selection%201969.pdf&quot;&gt;Merton (1969)&lt;/a&gt;&lt;sup id=&quot;fnref:17&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:17&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;12&lt;/a&gt;&lt;/sup&gt; gives the precise conditions for optimality, but we might fairly assume that for large \(x\), \(\frac{\mu}{2 \sigma^2}\) gives approximately optimal leverage for existential risk-focused philanthropists. For smaller \(x\), optimal leverage is higher, but it’s hard to specify the precise value.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://www.aacalc.com/docs/relative_risk_aversion&quot;&gt;Ordinary investors tend to have RRAs of roughly 2&lt;/a&gt;. That means under the temporary-benefits model and with a relatively low halving cost, philanthropists should behave like ordinary investors with respect to how they manage risk. With a relatively high halving cost, philanthropists should be willing to take on much more risk.&lt;/p&gt;

&lt;h1 id=&quot;acknowledgments&quot;&gt;Acknowledgments&lt;/h1&gt;

&lt;p&gt;Thanks to Aaron Gertler and Mindy McTeigue for providing feedback on drafts of this essay.&lt;/p&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:8&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;For the most part, I use the term “well-being” to refer to the value of the world at a particular point in time, and “utility” to refer to the total value across time of taking a particular action. Utility is equal to the sum of the discounted well-being at each point in time. I use this terminology because “utility” refers to the thing we value, which isn’t exactly the same as well-being—it’s well-being aggregated over time using a particular method. &lt;a href=&quot;#fnref:8&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:19&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;We could also talk about &lt;a href=&quot;https://forum.effectivealtruism.org/posts/skPFH8LxGdKQsTkJy/clarifying-existential-risks-and-existential-catastrophes&quot;&gt;existential catastrophes&lt;/a&gt; other than extinction. This essay will focus on extinction because it’s a dramatic binary event that’s easier to reason about. For simplicity, we can say that either society will be good and have some high level of well-being, or everyone dies and the world has zero well-being. We might also be able to talk about other existential catastrophes in the same way by treating catastrophic outcomes as resulting in zero well-being. Even if it’s not actually zero, it’s much smaller than the level of well-being in an ideal world, so we can round it off to zero. Some types of bad outcomes might result in substantially negative well-being, but that’s out of scope of this essay. &lt;a href=&quot;#fnref:19&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:20&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I am conflating these two concepts for simplicity. See prior footnote.&lt;sup id=&quot;fnref:19:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:19&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt; &lt;a href=&quot;#fnref:20&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:9&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Ramsey (1928). &lt;a href=&quot;http://piketty.pse.ens.fr/files/Ramsey1928.pdf&quot;&gt;A Mathematical Theory of Saving.&lt;/a&gt; &lt;a href=&quot;#fnref:9&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:9:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt; &lt;a href=&quot;#fnref:9:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;3&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:5&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;More precisely, it always exactly equals \(\frac{p}{1 - p}\). &lt;a href=&quot;#fnref:5&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:18&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;If we do prefer to give in the distant future, when exactly do we give? Do we just keep postponing our giving indefinitely? This problem comes up in many models like the one in this essay. For more, see Rendall (2018), &lt;a href=&quot;https://oxford.universitypressscholarship.com/view/10.1093/oso/9780198813248.001.0001/oso-9780198813248-chapter-10&quot;&gt;Discounting and the Paradox of the Infinitely Postponed Splurge.&lt;/a&gt; &lt;a href=&quot;#fnref:18&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:7&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;This follows from the fact that, with an interest rate above \(\frac{p}{1 + y}\), one can achieve arbitrarily large expected utility by waiting sufficiently long before giving. &lt;a href=&quot;#fnref:7&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:14&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;If the probability of extinction at time \(t\) equals \(\frac{1}{1 + at}\) for some constant spending rate \(a\), then total utility equals \(\sum_{t=0}^{\infty} \frac{1}{1 + at}\). &lt;a href=&quot;https://en.wikipedia.org/wiki/Harmonic_series_(mathematics)&quot;&gt;This series diverges.&lt;/a&gt; &lt;a href=&quot;#fnref:14&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:13&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Limit-discounted utilitarianism &lt;a href=&quot;https://econtheory.org/ojs/index.php/te/article/viewFile/1836/19819/582&quot;&gt;(Jonsson &amp;amp; Voorneveld, 2018)&lt;/a&gt; can compare infinite utility streams &lt;a href=&quot;https://philosophyforprogrammers.blogspot.com/2017/09/big-advance-in-infinite-ethics.html&quot;&gt;generated by a finite Markov process&lt;/a&gt;. A discount rate asymptotically approaching zero cannot arise from a finite Markov process, but it can still be defined with finitely many (and in fact very few) symbols, so there might be something similar to limit-discounted utilitarianism that can compare such utility streams. &lt;a href=&quot;#fnref:13&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:11&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Some factory farming interventions prevent animals from existing rather than improving their lives, which introduces questions of population ethics. But interventions such as welfare reforms do straightforwardly improve animals’ lives, so we can treat this as analogous to consumption. &lt;a href=&quot;#fnref:11&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:15&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Can other causes fit into the standard Ramsey model? Direct cash transfers (via GiveDirectly) fit the model well, because they operate by directly increasing poor people’s consumption. Others, like malaria nets or cage-free campaigns, don’t seem to fit as cleanly, and we might want to model them differently. I will not address any other causes in this essay, but this raises questions for future research. &lt;a href=&quot;#fnref:15&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:17&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Merton (1969). &lt;a href=&quot;http://lifecycleinvesting.net/Resources/merton%20lifetime%20portfolio%20selection%201969.pdf&quot;&gt;Lifetime Portfolio Selection Under Uncertainty: The Continuous-Time Case.&lt;/a&gt; &lt;a href=&quot;#fnref:17&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>Should We Prioritize Long-Term Existential Risk?</title>
				<pubDate>Wed, 19 Aug 2020 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2020/08/19/long_term_x-risk/</link>
				<guid isPermaLink="true">http://mdickens.me/2020/08/19/long_term_x-risk/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;&lt;em&gt;Cross-posted to the &lt;a href=&quot;https://forum.effectivealtruism.org/posts/L4ExK2dN2dkxw6SCw/should-we-prioritize-long-term-existential-risk&quot;&gt;Effective Altruism Forum&lt;/a&gt;&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Summary&lt;/strong&gt;: We should reduce existential risk in the long term, not merely over the next century. We might best do this by developing &lt;a href=&quot;https://philpapers.org/rec/JOHLIR&quot;&gt;longtermist institutions&lt;/a&gt;&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt; that will operate to keep existential risk persistently low.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;a href=&quot;/confidence_tags&quot;&gt;Confidence&lt;/a&gt;: Unlikely.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This essay was inspired by a &lt;a href=&quot;https://oxpr.io/blog/2017/2/11/tom-sittler-assumptions-of-arguments-for-existential-risk-reduction&quot;&gt;blog post&lt;/a&gt; and &lt;a href=&quot;https://fragile-credences.github.io/ltf-paper/&quot;&gt;paper&lt;/a&gt;&lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt; by Tom Sittler on long-term existential risk.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Civilization could continue to exist for billions (or even trillions) of years. To achieve our full potential, we must avoid existential catastrophe not just this century, but in all centuries to come. Most work on x-risk focuses on near-term risks, and might not do much to help over long time horizons. Longtermist institutional reform could ensure civilization continues to prioritize x-risk reduction well into the future.&lt;/p&gt;

&lt;p&gt;This argument depends on three key assumptions, which I will justify in this essay:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;The long-term probability of existential catastrophe matters more than the short-term probability.&lt;/li&gt;
  &lt;li&gt;Most efforts to reduce x-risk will probably only have an effect on the short term.&lt;/li&gt;
  &lt;li&gt;Longtermist institutional reform has a better chance of permanently reducing x-risk.&lt;/li&gt;
&lt;/ol&gt;

&lt;!-- more --&gt;

&lt;p&gt;For the sake of keeping this essay short, I will gloss over a lot of complexity and potential caveats. Suffice it to say that this essay’s thesis depends on a lot of assumptions, and I’m not convinced that they’re all true. This essay is intended more as a conversation-starter than a rigorous analysis.&lt;/p&gt;

&lt;h2 id=&quot;long-term-x-risk-matters-more-than-short-term-risk&quot;&gt;Long-term x-risk matters more than short-term risk&lt;/h2&gt;

&lt;p&gt;People often argue that we urgently need to prioritize reducing existential risk because we live in an unusually dangerous time. If existential risk decreases over time, one might intuitively expect that efforts to reduce x-risk will matter less later on. But in fact, the lower the risk of existential catastrophe, the &lt;em&gt;more&lt;/em&gt; valuable it is to further reduce that risk.&lt;/p&gt;

&lt;p&gt;Think of it like this: if we face a 50% risk of extinction per century, we will last two centuries on average. If we reduce the risk to 25%, the expected length of the future doubles to four centuries. Halving risk again doubles the expected length to eight centuries. In general, halving x-risk becomes more valuable when x-risk is lower.&lt;/p&gt;
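<p>To make the arithmetic explicit: with a constant extinction probability \(p\) per century, the number of centuries we survive follows a geometric distribution, whose mean is</p>

&lt;div class=&quot;mathbox&quot;&gt;
\begin{align}
\mathbb{E}[T] = \sum_{t=1}^{\infty} t \, p (1 - p)^{t-1} = \frac{1}{p}
\end{align}
&lt;/div&gt;

&lt;p&gt;so \(p = 0.5\) gives 2 expected centuries, \(p = 0.25\) gives 4, and \(p = 0.125\) gives 8: each halving of \(p\) doubles the expected length of the future.&lt;/p&gt;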

&lt;p&gt;Perhaps we expect x-risk to substantially decline in future centuries. In that case, given the choice between reducing x-risk this century and reducing it in the future, we prefer to reduce it in the future.&lt;/p&gt;

&lt;p&gt;This argument depends on certain specific claims about how x-risk reduction works. But the basic result (that we care more about x-risk in the long term than in the short term) holds up across a variety of assumptions. See &lt;a href=&quot;https://fragile-credences.github.io/ltf-paper/longtermfuture.pdf&quot;&gt;Sittler (2018)&lt;/a&gt; section 3 for a more rigorous justification and an explanation of the precise conditions under which this result holds.&lt;sup id=&quot;fnref:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;h2 id=&quot;current-x-risk-reduction-efforts-might-only-work-in-the-short-term&quot;&gt;Current x-risk reduction efforts might only work in the short term&lt;/h2&gt;

&lt;p&gt;If we look at current efforts to reduce the probability of existential catastrophe, it seems like most of them will only have relatively short-term effects. For example, nuclear disarmament treaties probably reduce x-risk. But treaties don’t last forever. We should expect disarmament treaties to break down over time. Most efforts to reduce x-risk seem like this: they will reduce risk temporarily, but their effects will diminish over the next few decades. (AI safety might be an exception to this if a friendly AI can be expected to minimize all-cause existential risk.)&lt;/p&gt;

&lt;p&gt;It seems likely to me that most x-risk reduction efforts will only work temporarily. That said, this belief is more intuitive than empirical, and I do not have a strong justification for it (and I’d only put maybe 60% confidence in this belief). Other people might reasonably disagree.&lt;/p&gt;

&lt;h2 id=&quot;longtermist-institutional-reform-could-permanently-reduce-x-risk&quot;&gt;Longtermist institutional reform could permanently reduce x-risk&lt;/h2&gt;

&lt;p&gt;John &amp;amp; MacAskill’s recent paper &lt;a href=&quot;https://philpapers.org/rec/JOHLIR&quot;&gt;Longtermist Institutional Reform&lt;/a&gt; proposes developing institutions with incentives that will ensure the welfare of future generations. The notion of long-term vs. short-term existential risk appears to provide a compelling argument for prioritizing longtermist institutional reform over x-risk reduction.&lt;/p&gt;

&lt;p&gt;The specific institutional changes proposed by John &amp;amp; MacAskill might not necessarily help reduce long-term x-risk. But the general strategy of longtermist institutional reform looks promising. If we can develop stable and rational longtermist institutions, those institutions will put effort into reducing existential risk, and will continue doing so into the long-term future. This seems like one of the most compelling ways for us to reduce long-term x-risk. And as discussed in the previous sections, this probably matters more than reducing x-risk in the short term.&lt;/p&gt;

&lt;h2 id=&quot;counter-arguments&quot;&gt;Counter-arguments&lt;/h2&gt;

&lt;p&gt;I have argued that we might want to prioritize longtermist institutional reform over short-term existential risk reduction. This result might not hold up if:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;In future centuries, civilization will reduce x-risk to such a low rate that it will become too difficult to reduce any further.&lt;/li&gt;
  &lt;li&gt;Short-term x-risk reduction efforts can permanently reduce risk, more so than longtermist institutional reform would.&lt;/li&gt;
  &lt;li&gt;Longtermist institutional reform is too intractable.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Maybe some other intervention would do a better job of mitigating long-term x-risk—for example, &lt;a href=&quot;https://forum.effectivealtruism.org/posts/LpkXtFXdsRd4rG8Kb/reducing-long-term-risks-from-malevolent-actors&quot;&gt;reducing risks from malevolent actors&lt;/a&gt;. Or we could work on &lt;a href=&quot;https://forum.effectivealtruism.org/posts/kCkd9Mia2EmbZ3A9c/deliberation-may-improve-decision-making&quot;&gt;improving decision-making in general&lt;/a&gt;. Or we might simply prefer to invest our money to be spent by future generations.&lt;/p&gt;

&lt;h2 id=&quot;subjective-probability-estimates&quot;&gt;Subjective probability estimates&lt;/h2&gt;

&lt;p&gt;The EA community generally underrates the significance of long-term x-risk reduction: 3 in 4&lt;/p&gt;

&lt;p&gt;Marginal work on (explicit) long-term x-risk reduction is more cost-effective than marginal work on short-term x-risk reduction: 1 in 3&lt;/p&gt;

&lt;p&gt;Longtermist institutional reform is the best way to explicitly reduce long-term x-risk: 1 in 3&lt;/p&gt;

&lt;h2 id=&quot;acknowledgments&quot;&gt;Acknowledgments&lt;/h2&gt;

&lt;p&gt;Thanks to Sofia Fogel, Ozzie Gooen, and Kieran Greig for providing feedback on this essay.&lt;/p&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;John, T. &amp;amp; MacAskill, W. (2020, forthcoming). Longtermist Institutional Reform. In Natalie Cargill (ed.), &lt;em&gt;The Long View&lt;/em&gt;. London, UK: FIRST. &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:2&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Sittler, T. (2018). The expected value of the long-term future. &lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:3&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Using Sittler’s model in section 3.1.1, “Diminishing returns on risk reduction”, under the assumptions that (1) one can only reduce x-risk for the current century and (2) efforts can reduce x-risk by an amount proportional to the current risk, it follows that x-risk reduction efforts are equally valuable regardless of the level of x-risk. Therefore, it’s better to reduce x-risk in future centuries than now, because you can invest your money at a positive rate of return and thus spend more on x-risk reduction in the future than you can now. &lt;a href=&quot;#fnref:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>The Importance of Unknown Existential Risks</title>
				<pubDate>Thu, 23 Jul 2020 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2020/07/23/unknown_x-risks/</link>
				<guid isPermaLink="true">http://mdickens.me/2020/07/23/unknown_x-risks/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;&lt;strong&gt;Summary:&lt;/strong&gt; The most dangerous existential risks appear to be the ones that we only became aware of recently. As technology advances, new existential risks appear. Extrapolating this trend, there might exist even worse risks that we haven’t discovered yet.&lt;/p&gt;

&lt;!-- more --&gt;

&lt;p&gt;&lt;em&gt;Epistemic status: From the inside view, I find the core argument compelling, but it involves sufficiently complicated considerations that I’m not more than 50% confident in its correctness. In this essay, if I claim that something is true, what I really mean is that it’s true from the perspective of a particular argument, not necessarily that I believe it.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Cross-posted to the &lt;a href=&quot;https://forum.effectivealtruism.org/posts/CRofnyTEqL4uSNBSi/the-importance-of-unknown-existential-risks&quot;&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;h2 id=&quot;contents&quot;&gt;Contents&lt;/h2&gt;

&lt;ul id=&quot;markdown-toc&quot;&gt;
  &lt;li&gt;&lt;a href=&quot;#contents&quot; id=&quot;markdown-toc-contents&quot;&gt;Contents&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#unknown-existential-risks&quot; id=&quot;markdown-toc-unknown-existential-risks&quot;&gt;Unknown existential risks&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#types-of-unknown-risk&quot; id=&quot;markdown-toc-types-of-unknown-risk&quot;&gt;Types of unknown risk&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#why-unknown-risks-might-not-matter-so-much&quot; id=&quot;markdown-toc-why-unknown-risks-might-not-matter-so-much&quot;&gt;Why unknown risks might not matter so much&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#argument-1-ai-matters-most&quot; id=&quot;markdown-toc-argument-1-ai-matters-most&quot;&gt;Argument 1: AI matters most&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#argument-2-there-are-no-unknown-risks&quot; id=&quot;markdown-toc-argument-2-there-are-no-unknown-risks&quot;&gt;Argument 2: There are no unknown risks&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#argument-3-new-risks-only-look-riskier-due-to-bad-priors&quot; id=&quot;markdown-toc-argument-3-new-risks-only-look-riskier-due-to-bad-priors&quot;&gt;Argument 3: New risks only look riskier due to bad priors&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#argument-4-existential-risk-will-decrease-in-the-future-due-to-deliberate-efforts&quot; id=&quot;markdown-toc-argument-4-existential-risk-will-decrease-in-the-future-due-to-deliberate-efforts&quot;&gt;Argument 4: Existential risk will decrease in the future due to deliberate efforts&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#do-we-underestimate-unknown-x-risks&quot; id=&quot;markdown-toc-do-we-underestimate-unknown-x-risks&quot;&gt;Do we underestimate unknown x-risks?&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#implications&quot; id=&quot;markdown-toc-implications&quot;&gt;Implications&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#notes&quot; id=&quot;markdown-toc-notes&quot;&gt;Notes&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;unknown-existential-risks&quot;&gt;Unknown existential risks&lt;/h2&gt;

&lt;p&gt;Humanity has existed for hundreds of thousands of years, and civilization for about ten thousand. During this time, as far as we can tell, humanity faced a relatively low probability of existential catastrophe (far less than a 1% chance per century). But more recently, while technological growth has offered great benefits, it has also introduced a new class of risks to the future of civilization, such as the possibilities of nuclear war and catastrophic climate change. And some existential risks have been conceived of but aren’t yet possible, such as self-replicating nanotechnology and superintelligent AI.&lt;/p&gt;

&lt;p&gt;In his book &lt;a href=&quot;https://theprecipice.com/&quot;&gt;&lt;em&gt;The Precipice&lt;/em&gt;&lt;/a&gt;, Toby Ord classifies existential risks into three categories: natural, anthropogenic, and future. He provides the following list of risks:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Natural Risks
    &lt;ul&gt;
      &lt;li&gt;Asteroids &amp;amp; Comets&lt;/li&gt;
      &lt;li&gt;Supervolcanic Eruptions&lt;/li&gt;
      &lt;li&gt;Stellar Explosions&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;Anthropogenic Risks
    &lt;ul&gt;
      &lt;li&gt;Nuclear Weapons&lt;/li&gt;
      &lt;li&gt;Climate Change&lt;/li&gt;
      &lt;li&gt;Environmental Damage&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;Future Risks
    &lt;ul&gt;
      &lt;li&gt;Pandemics&lt;/li&gt;
      &lt;li&gt;Unaligned Artificial Intelligence&lt;/li&gt;
      &lt;li&gt;Dystopian Scenarios&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Additionally, he provides his subjective probability estimates that each type of event will occur and result in an existential catastrophe within the next century:&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Existential catastrophe&lt;/th&gt;
      &lt;th&gt;Chance within next 100 years&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Asteroid or comet impact&lt;/td&gt;
      &lt;td&gt;∼ 1 in 1,000,000&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Supervolcanic eruption&lt;/td&gt;
      &lt;td&gt;∼ 1 in 10,000&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Stellar explosion&lt;/td&gt;
      &lt;td&gt;∼ 1 in 1,000,000,000&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Nuclear war&lt;/td&gt;
      &lt;td&gt;∼ 1 in 1,000&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Climate change&lt;/td&gt;
      &lt;td&gt;∼ 1 in 1,000&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Other environmental damage&lt;/td&gt;
      &lt;td&gt;∼ 1 in 1,000&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;“Naturally” arising pandemics&lt;/td&gt;
      &lt;td&gt;∼ 1 in 10,000&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Engineered pandemics&lt;/td&gt;
      &lt;td&gt;∼ 1 in 30&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Unaligned artificial intelligence&lt;/td&gt;
      &lt;td&gt;∼ 1 in 10&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Unforeseen anthropogenic risks&lt;/td&gt;
      &lt;td&gt;∼ 1 in 30&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Other anthropogenic risks&lt;/td&gt;
      &lt;td&gt;∼ 1 in 50&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Obviously, these estimates depend on complicated assumptions and people can reasonably disagree about the numbers, but I believe we can agree that they are at least qualitatively correct (e.g., asteroid/comet impacts pose relatively low existential risk, and engineered pandemics look relatively dangerous).&lt;/p&gt;

&lt;p&gt;An interesting pattern emerges: the naturally-caused existential catastrophes have the lowest probability, anthropogenic causes appear riskier, and future causes look riskier still. We can also see that the more recently-discovered risks tend to pose a greater threat:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;Imagine if the scientific establishment of 1930 had been asked to compile a list of the existential risks humanity would face over the following hundred years. They would have missed most of the risks covered in this book—especially the anthropogenic risks. [Footnote:] Nuclear weapons would not have made the list, as fission was only discovered in 1938. Nor would engineered pandemics, as genetic engineering was first demonstrated in the 1960s. The computer hadn’t yet been invented, and it wasn’t until the 1950s that the idea of artificial intelligence, and its associated risks, received serious discussion from scientists. The possibility of anthropogenic global warming can be traced back to 1896, but the hypothesis only began to receive support in the 1960s, and was only widely recognized as a risk in the 1980s.&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In other words:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Natural risks that have been present for all of civilization’s history do not pose much threat.&lt;/li&gt;
  &lt;li&gt;Risks that only emerged in the 20th century appear more likely.&lt;/li&gt;
  &lt;li&gt;The likeliest risks are those that cannot occur with present-day technology, but might occur within the next century.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As technology improves, the probability of an existential catastrophe increases. If we extrapolate this trend, we can expect to discover even more dangerous risks that as-yet-unknown future technologies will enable. As Ord observes, the scientific community of 100 years ago had not conceived of most of the risks that we now consider the most significant. Perhaps in 100 years’ time, technological advances will enable much more significant risks that we cannot think of today.&lt;/p&gt;

&lt;p&gt;Or perhaps there exist existential risks that are possible today, but that we haven’t yet considered. We developed nuclear weapons in 1945, but it was not until almost 40 years later that we realized their use could lead to a nuclear winter.&lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt; We might already have the power to cause an existential catastrophe via some mechanism not on Toby Ord’s list; and that mechanism might be easier to trigger, or more likely to occur, than any of the ones we know about.&lt;/p&gt;

&lt;p&gt;If we accept this line of reasoning, then looking only at known risks might lead us to substantially underestimate the probability of an existential catastrophe.&lt;/p&gt;

&lt;p&gt;Even more worryingly, existential risk might continue increasing indefinitely until an existential catastrophe occurs. If technological growth enables greater risk, and technology continues improving, existential risk will continue increasing as well.&lt;sup id=&quot;fnref:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt; Improved technology can also help us &lt;em&gt;reduce&lt;/em&gt; risk, and we can hope that the development of beneficial technologies will outpace that of harmful ones. But a naive extrapolation from history does not present an optimistic outlook.&lt;/p&gt;

&lt;h2 id=&quot;types-of-unknown-risk&quot;&gt;Types of unknown risk&lt;/h2&gt;

&lt;p&gt;We can make a distinction between two types of unknown risk:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Currently-possible risks that we haven’t thought of&lt;/li&gt;
  &lt;li&gt;Not-yet-possible risks that will become possible with future technology&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The existence of the first type of risk leads us to conclude that we face a higher probability of imminent existential catastrophe than we might otherwise think. The second type doesn’t affect our beliefs about existential risk in the near term, but it does suggest that we should be more concerned about x-risks over the next century or longer.&lt;/p&gt;

&lt;p&gt;We shouldn’t necessarily respond to these two types of unknown risks in the same way. For example: To deal with currently-possible unknown risks, we could spend more effort thinking about possible sources of risk, but this strategy probably wouldn’t help us predict x-risks that depend on future technology.&lt;/p&gt;

&lt;h2 id=&quot;why-unknown-risks-might-not-matter-so-much&quot;&gt;Why unknown risks might not matter so much&lt;/h2&gt;

&lt;p&gt;In this section, I will present a few arguments that unknown risks don’t matter as much as the previous reasoning might suggest (in no particular order). Of these arguments, the only one I find compelling is the “AI matters most” argument, although it involves sufficiently complex considerations that I do not feel confident about it.&lt;/p&gt;

&lt;h3 id=&quot;argument-1-ai-matters-most&quot;&gt;Argument 1: AI matters most&lt;/h3&gt;

&lt;p&gt;We have some reason to expect superintelligent AI in particular to pose a greater risk than any unknown future technology. If we do develop superintelligent AI, humans will no longer be the most intelligent creatures on the planet. Intelligence has been the driving factor allowing humans to achieve dominance over the world’s resources, so we can reasonably expect that a sufficiently intelligent AI would be able to gain control over humanity. If we no longer control our destiny, then on priors, we should not expect a particularly high probability that we realize a positive future&lt;sup id=&quot;fnref:11&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:11&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;Arguably, unknown risks cannot pose the same level of threat because they will not change who controls the future.&lt;/p&gt;

&lt;p&gt;(On the other hand, there could conceivably be some unknown consideration even more important than who controls the future; if we thought of it, we would realize that it matters more than superintelligent AI.)&lt;/p&gt;

&lt;p&gt;A sufficiently powerful friendly AI might be able to fix all anthropogenic existential risks and maybe even some natural ones, reducing the probability of existential catastrophe to near zero—thus rendering unknown risks irrelevant. On the other hand, perhaps future technology will introduce some existential risks that not even a superintelligent AI can foresee or mitigate.&lt;/p&gt;

&lt;h3 id=&quot;argument-2-there-are-no-unknown-risks&quot;&gt;Argument 2: There are no unknown risks&lt;/h3&gt;

&lt;p&gt;Perhaps one could argue that we have already discovered the most important risks, and that our uncertainty only lies in how exactly those risks could lead to existential catastrophe. (For instance, severe climate change could result in unexpected outcomes that hurt civilization much more than we anticipated.) On the outside view, I tend to disagree with this argument, based on the fact that we have continued to discover new existential risks throughout the past century. But maybe upon further investigation, this argument would seem more compelling. Perhaps one could create a comprehensive taxonomy of plausible existential risks and show that the known risks fully cover the taxonomy&lt;sup id=&quot;fnref:12&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:12&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;5&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;h3 id=&quot;argument-3-new-risks-only-look-riskier-due-to-bad-priors&quot;&gt;Argument 3: New risks only look riskier due to bad priors&lt;/h3&gt;

&lt;p&gt;In general, the more recently we discovered a particular existential risk, the more probable it appears. I proposed that this pattern occurs because technological growth introduces increasingly significant risks. But we have an alternative explanation: Perhaps all existential risks are unlikely, but the recently discovered risks appear more likely due to bad priors plus wide error bars in our probability estimates.&lt;/p&gt;

&lt;p&gt;Toby Ord alludes to this argument in &lt;a href=&quot;https://80000hours.org/podcast/episodes/toby-ord-the-precipice-existential-risk-future-humanity/#distinction-between-natural-and-anthropogenic-risks-004542&quot;&gt;his 80,000 Hours interview:&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;Perhaps you were thinking about asteroids and how big they were and how violent the impacts would be and think, “Well, that seems a very plausible way we could go extinct”. But then, once you update on the fact that we’ve lasted 2000 centuries without being hit by an asteroid or anything like that, that lowers the probability of those, whereas we don’t have a similar way of lowering the probabilities from other things, from just what they might appear to be at at first glance.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If we knew much less about the history of asteroid impacts, we might assign a much higher probability to an existential catastrophe due to asteroid impact. More generally, it’s possible that we systematically assign way too high a prior probability to existential risks that aren’t well-understood, and then end up revising these probabilities downward as we learn more.&lt;/p&gt;

&lt;p&gt;If true, this line of reasoning means that not only should we not worry about unknown risks, but we probably shouldn’t even worry about known risks, because we systematically overestimate their likelihood.&lt;/p&gt;

&lt;p&gt;Will MacAskill says something similar in &lt;a href=&quot;https://forum.effectivealtruism.org/posts/XXLf6FmWujkxna3E6/are-we-living-at-the-most-influential-time-in-history-1&quot;&gt;“Are we living at the most influential time in history?”&lt;/a&gt;, where he argues that we should assume a low base rate of existential catastrophe, and that we don’t have good enough reason to expect the present to look substantially different from the rest of history.&lt;/p&gt;

&lt;p&gt;I don’t find this argument particularly compelling. Civilization today has unprecedented technology and globalization, so I do not believe most of human history serves as a useful reference class. Additionally, we have good reason to believe that certain types of existential catastrophes have a reasonably high probability of occurring. For example, we know that the United States and the Soviet Union have &lt;em&gt;almost&lt;/em&gt; started a nuclear war more than once&lt;sup id=&quot;fnref:13&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:13&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;6&lt;/a&gt;&lt;/sup&gt;. And while a nuclear war doesn’t necessarily entail an existential catastrophe, it definitely makes such a catastrophe dramatically more likely.&lt;/p&gt;

&lt;p&gt;(Strictly speaking, MacAskill does not necessarily claim that the current rate of existential risk is low, only that we do not live in a particularly influential time. His argument is consistent with the claim that existential risk increases over time, and will continue to increase in the future.)&lt;/p&gt;

&lt;h3 id=&quot;argument-4-existential-risk-will-decrease-in-the-future-due-to-deliberate-efforts&quot;&gt;Argument 4: Existential risk will decrease in the future due to deliberate efforts&lt;/h3&gt;

&lt;p&gt;If the importance of x-risk becomes more widely recognized and civilization devotes much more effort to it in the future, this would probably reduce risk across all causes. If we expect this to happen within the next 50–100 years, that would suggest we currently live in the most dangerous period. In previous centuries, we faced lower existential risk; in future centuries, we will make greater efforts to reduce x-risk. And if we believe that unknown risks primarily come from future technologies, then by the time those technologies emerge, we will have stronger x-risk protection measures in place. (Toby Ord claims in his 80,000 Hours interview that this is the main reason he didn’t assign a higher probability to unknown risks.)&lt;/p&gt;

&lt;p&gt;This argument seems reasonable, but it’s not necessarily relevant to cause prioritization. Even if we expect deliberate efforts to reduce x-risk to come into play before new, currently unknown x-risks emerge, that doesn’t mean we should deprioritize unknown risks. Our actions today to prioritize unknown risks could be the very reason that such risks will not seriously threaten future civilization.&lt;/p&gt;

&lt;h2 id=&quot;do-we-underestimate-unknown-x-risks&quot;&gt;Do we underestimate unknown x-risks?&lt;/h2&gt;

&lt;ol&gt;
  &lt;li&gt;Toby Ord estimates that unforeseen risks have a 1 in 30 chance of causing an existential catastrophe in the next century. He gives the same probability to engineered pandemics, a higher probability to AI, and a lower probability to anything else.&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://www.researchgate.net/publication/291086909_12_Risks_that_threaten_human_civilisation_The_case_for_a_new_risk_category&quot;&gt;Pamlin &amp;amp; Armstrong (2015)&lt;/a&gt;&lt;sup id=&quot;fnref:4&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:4&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;7&lt;/a&gt;&lt;/sup&gt; (p. 166) estimate a 0.1% chance of existential catastrophe due to “unknown consequences” in the next 100 years. They give unknown consequences an order of magnitude higher probability than any other risk, with the possible exception of AI&lt;sup id=&quot;fnref:9&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:9&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;8&lt;/a&gt;&lt;/sup&gt;.&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;http://eprints.lse.ac.uk/89506/1/Beard_Existential-Risk-Assessments_Accepted.pdf&quot;&gt;Rowe &amp;amp; Beard (2018)&lt;/a&gt;&lt;sup id=&quot;fnref:10&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:10&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;9&lt;/a&gt;&lt;/sup&gt; provide a survey of existential risk estimates, and they only find one source (Pamlin &amp;amp; Armstrong) that considers unknown risks (Ord’s book had not been published yet).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Based on these estimates, Pamlin &amp;amp; Armstrong appear to basically agree with the argument in this essay that unknown risks pose a greater threat than all known risks except possibly AI (although I believe they substantially underestimate the absolute probability of existential catastrophe). Ord appears to agree with the weaker claim that unknown risks matter a lot, but not that they matter more than all known risks. But based on Rowe &amp;amp; Beard’s survey (as well as &lt;a href=&quot;https://docs.google.com/spreadsheets/d/1W10B6NJjicD8O0STPiT3tNV3oFnT8YsfjmtYR8RO_RI/edit&quot;&gt;Michael Aird’s database of existential risk estimates&lt;/a&gt;), no other sources appear to have addressed the likelihood of unknown x-risks, which implies that most others do not give unknown risks serious consideration. And there exists almost no published research on the issue.&lt;/p&gt;

&lt;h2 id=&quot;implications&quot;&gt;Implications&lt;/h2&gt;

&lt;p&gt;If unknown risks do pose a greater threat than any known risk, this might substantially alter how we should allocate resources toward mitigating existential risk, although it’s not immediately clear what we should change. The most straightforward implication is that we should expend relatively more effort on improving general civilizational robustness, and less on mitigating particular known risks. But this might not matter much, because (1) mitigating known risks appears more tractable and (2) the world already severely neglects known x-risks.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://globalchallenges.org/&quot;&gt;The Global Challenges Foundation&lt;/a&gt; has produced some content on unknown risks.&lt;sup id=&quot;fnref:4:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:4&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;7&lt;/a&gt;&lt;/sup&gt;&lt;sup id=&quot;fnref:5&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:5&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;10&lt;/a&gt;&lt;/sup&gt; Pamlin &amp;amp; Armstrong&lt;sup id=&quot;fnref:4:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:4&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;7&lt;/a&gt;&lt;/sup&gt; (p. 129) offer a few high-level ideas about how to mitigate unknown risks:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;ol&gt;
    &lt;li&gt;
      &lt;p&gt;Smart sensors and surveillance could detect many uncertain risks in the early stages, and allow researchers to grasp what is going on.&lt;/p&gt;
    &lt;/li&gt;
    &lt;li&gt;
      &lt;p&gt;Proper risk assessment in domains where uncertain risks are possible could cut down on the risk considerably.&lt;/p&gt;
    &lt;/li&gt;
    &lt;li&gt;
      &lt;p&gt;Global coordination would aid risk assessment and mitigation.&lt;/p&gt;
    &lt;/li&gt;
    &lt;li&gt;
      &lt;p&gt;Specific research into uncertain and unknown risks would increase our understanding of the risks involved.&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/blockquote&gt;

&lt;p&gt;On the subject of mitigating unknown risks, Toby Ord writes: “While we cannot directly work on them, they may still be lowered through our broader efforts to create a world that takes its future seriously” (p. 162).&lt;/p&gt;

&lt;p&gt;In &lt;a href=&quot;https://nickbostrom.com/papers/vulnerable.pdf&quot;&gt;The Vulnerable World Hypothesis&lt;/a&gt;&lt;sup id=&quot;fnref:6&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:6&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;11&lt;/a&gt;&lt;/sup&gt;, Nick Bostrom addresses the possibility that “[s]cientific and technological progress might change people’s capabilities or incentives in ways that would destabilize civilization.” The paper includes some discussion of policy implications.&lt;/p&gt;

&lt;p&gt;If we’re concerned about unknown future risks, we could place our money in a long-term existential risk fund that invests it for use in future decades or centuries, deploying it only when the probability of existential catastrophe is deemed sufficiently high.&lt;/p&gt;

&lt;p&gt;But note that a higher probability of existential catastrophe doesn’t necessarily mean we should expend more effort on reducing the probability. &lt;a href=&quot;https://onlinelibrary.wiley.com/doi/abs/10.1111/1758-5899.12318&quot;&gt;Yew-Kwang Ng (2016)&lt;/a&gt;&lt;sup id=&quot;fnref:7&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:7&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;12&lt;/a&gt;&lt;/sup&gt; shows that, under certain assumptions, a higher probability of existential catastrophe &lt;em&gt;decreases&lt;/em&gt; our willingness to work on reducing it. Much more could be said about this&lt;sup id=&quot;fnref:8&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:8&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;13&lt;/a&gt;&lt;/sup&gt;; I only bring it up as a perspective worth considering.&lt;/p&gt;

&lt;p&gt;In general, unknown risks look important and under-researched, but it is not yet clear how best to mitigate them. More work is required to better evaluate the probability of an existential catastrophe due to unknown risks, and to figure out what we can do about them.&lt;/p&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Ord, Toby. The Precipice (p. 162 and footnote 137 (p. 470)). Hachette Books. Kindle Edition. &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:2&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;&lt;a href=&quot;https://80000hours.org/podcast/episodes/toby-ord-the-precipice-existential-risk-future-humanity/#climate-change-005108&quot;&gt;Toby Ord on the precipice and humanity’s potential futures.&lt;/a&gt; 80,000 Hours Podcast. Relevant quote:&lt;/p&gt;

      &lt;blockquote&gt;
        &lt;p&gt;[I]n the case of nuclear war, for example, nuclear winter hadn’t been thought of until 1982 and 83 and so that’s a case where we had nuclear weapons from 1945 and there was a lot of conversation about how they could cause the end of the world perhaps, but they hadn’t stumbled upon a mechanism that actually really was one that really could pose a threat. But I don’t think it was misguided to think that perhaps it could cause the end of humanity at those early times, even when they hadn’t stumbled across the correct mechanism yet.&lt;/p&gt;
      &lt;/blockquote&gt;
      &lt;p&gt;&lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:3&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Technology &lt;a href=&quot;https://www.openphilanthropy.org/blog/modeling-human-trajectory&quot;&gt;appears to be growing hyperbolically&lt;/a&gt;. If existential risk is roughly proportional to technology, that means we could rapidly approach a 100% probability of existential catastrophe as technological growth accelerates. &lt;a href=&quot;#fnref:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:11&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I have &lt;a href=&quot;https://mdickens.me/2015/08/15/is_preventing_human_extinction_good/&quot;&gt;substantial doubts&lt;/a&gt; as to whether humanity will achieve a positive future if left to our own devices, but that’s out of scope for this essay. &lt;a href=&quot;#fnref:11&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:12&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I tried to do this, and gave up after I came up with about a dozen categories of existential risk on the level of detail of “extinction via molecules that bind to the molecules in cellular machinery, rendering them useless” or “extinction via global loss of motivation to reproduce”. Clearly we can find many risk categories at this level of detail, but also this gives far too little detail to actually explain the sources of risk or assess their probabilities. A proper taxonomy would require much greater detail, and would probably be intractably large. &lt;a href=&quot;#fnref:12&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:13&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I find it plausible that the &lt;a href=&quot;https://en.wikipedia.org/wiki/1983_Soviet_nuclear_false_alarm_incident&quot;&gt;Petrov incident&lt;/a&gt; had something like an 80% &lt;em&gt;ex ante&lt;/em&gt; probability of leading to nuclear war. The &lt;a href=&quot;https://en.wikipedia.org/wiki/Anthropic_principle&quot;&gt;anthropic principle&lt;/a&gt; means we can’t necessarily treat the non-occurrence of an extinction event as evidence that the probability is low. (I say “can’t necessarily” rather than “can’t” because it is not clear that the anthropic principle is correct.) &lt;a href=&quot;#fnref:13&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:4&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Pamlin, Dennis &amp;amp; Armstrong, Stuart. (2015). &lt;a href=&quot;https://www.researchgate.net/publication/291086909_12_Risks_that_threaten_human_civilisation_The_case_for_a_new_risk_category&quot;&gt;12 Risks that threaten human civilisation: The case for a new risk category.&lt;/a&gt; &lt;a href=&quot;#fnref:4&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:4:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt; &lt;a href=&quot;#fnref:4:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;3&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:9&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Pamlin &amp;amp; Armstrong do not provide a single point of reference for all of their probability estimates, so I have produced one here for readers’ convenience. When providing a probability estimate, they give a point estimate, except for AI, where they provide a range, because “Artificial Intelligence is the global risk where least is known.” For three existential risks, they decline to provide an estimate.&lt;/p&gt;

      &lt;table&gt;
        &lt;tbody&gt;
          &lt;tr&gt;
            &lt;td&gt;Extreme climate change&lt;/td&gt;
            &lt;td&gt;0.005%&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
            &lt;td&gt;Nuclear war&lt;/td&gt;
            &lt;td&gt;0.005%&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
            &lt;td&gt;Global pandemic&lt;/td&gt;
            &lt;td&gt;0.0001%&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
            &lt;td&gt;Ecological catastrophe&lt;/td&gt;
            &lt;td&gt;N/A&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
            &lt;td&gt;Global system collapse&lt;/td&gt;
            &lt;td&gt;N/A&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
            &lt;td&gt;Asteroid impact&lt;/td&gt;
            &lt;td&gt;0.00013%&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
            &lt;td&gt;Super-volcano&lt;/td&gt;
            &lt;td&gt;0.00003%&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
            &lt;td&gt;Synthetic biology&lt;/td&gt;
            &lt;td&gt;0.01%&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
            &lt;td&gt;Nanotechnology&lt;/td&gt;
            &lt;td&gt;0.01%&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
            &lt;td&gt;AI&lt;/td&gt;
            &lt;td&gt;0-10%&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
            &lt;td&gt;Unknown consequences&lt;/td&gt;
            &lt;td&gt;0.1%&lt;/td&gt;
          &lt;/tr&gt;
          &lt;tr&gt;
            &lt;td&gt;Bad global governance&lt;/td&gt;
            &lt;td&gt;N/A&lt;/td&gt;
          &lt;/tr&gt;
        &lt;/tbody&gt;
      &lt;/table&gt;
      &lt;p&gt;&lt;a href=&quot;#fnref:9&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:10&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Rowe, Thomas and Beard, Simon (2018) Probabilities, methodologies and the evidence base in existential risk assessments. Working paper, Centre for the Study of Existential Risk, Cambridge, UK. &lt;a href=&quot;#fnref:10&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:5&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Tzezana, Roey. &lt;a href=&quot;https://globalchallenges.org/global-risks/unknown-risks/&quot;&gt;Unknown risks.&lt;/a&gt; &lt;a href=&quot;#fnref:5&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:6&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Bostrom, N. (2019), The Vulnerable World Hypothesis. Glob Policy, 10: 455-476. doi:10.1111/1758-5899.12718 &lt;a href=&quot;#fnref:6&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:7&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Ng, Y.‐K. (2016), The Importance of Global Extinction in Climate Change Policy. Glob Policy, 7: 315-322. doi:10.1111/1758-5899.12318 &lt;a href=&quot;#fnref:7&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:8&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;As one counter-point: In Ng’s model, a fixed investment reduces the probability of extinction by a fixed proportion (that is, it reduces the probability from &lt;code&gt;P&lt;/code&gt; to &lt;code&gt;aP&lt;/code&gt; for some constant &lt;code&gt;a &amp;lt; 1&lt;/code&gt;). But I find it likely that we can reduce existential risk more rapidly than that when it’s higher.&lt;/p&gt;

      &lt;p&gt;As a second counter-point, Ng’s model assumes that a one-time investment can permanently reduce existential risk, which also seems questionable to me (although not obviously wrong). &lt;a href=&quot;#fnref:8&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>Estimating the Philanthropic Discount Rate</title>
				<pubDate>Fri, 03 Jul 2020 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2020/07/03/estimating_discount_rate/</link>
				<guid isPermaLink="true">http://mdickens.me/2020/07/03/estimating_discount_rate/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;&lt;em&gt;Cross-posted to the &lt;a href=&quot;https://forum.effectivealtruism.org/posts/3QhcSxHTz2F7xxXdY/estimating-the-philanthropic-discount-rate&quot;&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;h2 id=&quot;summary&quot;&gt;Summary&lt;/h2&gt;

&lt;ul&gt;
  &lt;li&gt;How we should spend our philanthropic resources over time depends on how much we discount the future. A higher discount rate means we should spend more now; a lower discount rate tells us to spend less now and more later.&lt;/li&gt;
  &lt;li&gt;We (probably) should not assign less moral value to future beings, but we should still discount the future based on the possibility of extinction, expropriation, value drift, or changes in philanthropic opportunities.&lt;/li&gt;
  &lt;li&gt;According to the &lt;a href=&quot;https://plato.stanford.edu/entries/ramsey-economics/&quot;&gt;Ramsey model&lt;/a&gt;, if we estimate the discount rate based on those four factors, that tells us how quickly we should consume our resources&lt;sup id=&quot;fnref:23&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:23&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;.&lt;/li&gt;
  &lt;li&gt;We can decrease the discount rate, most notably by reducing existential risk and guarding against value drift. We still have a lot to learn about the best ways to do this.&lt;/li&gt;
  &lt;li&gt;According to a simple model, improving our estimate of the discount rate might be the top effective altruist priority.&lt;/li&gt;
&lt;/ul&gt;

&lt;!-- more --&gt;

&lt;h2 id=&quot;contents&quot;&gt;Contents&lt;/h2&gt;

&lt;ul id=&quot;markdown-toc&quot;&gt;
  &lt;li&gt;&lt;a href=&quot;#summary&quot; id=&quot;markdown-toc-summary&quot;&gt;Summary&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#contents&quot; id=&quot;markdown-toc-contents&quot;&gt;Contents&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#introduction&quot; id=&quot;markdown-toc-introduction&quot;&gt;Introduction&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#significance-of-a-declining-long-run-discount-rate&quot; id=&quot;markdown-toc-significance-of-a-declining-long-run-discount-rate&quot;&gt;Significance of a declining long-run discount rate&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#breaking-down-the-current-discount-rate&quot; id=&quot;markdown-toc-breaking-down-the-current-discount-rate&quot;&gt;Breaking down the current discount rate&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#economic-nullification&quot; id=&quot;markdown-toc-economic-nullification&quot;&gt;Economic nullification&lt;/a&gt;        &lt;ul&gt;
          &lt;li&gt;&lt;a href=&quot;#extinction&quot; id=&quot;markdown-toc-extinction&quot;&gt;Extinction&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#superintelligent-ai&quot; id=&quot;markdown-toc-superintelligent-ai&quot;&gt;Superintelligent AI&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#economic-collapse&quot; id=&quot;markdown-toc-economic-collapse&quot;&gt;Economic collapse&lt;/a&gt;&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#expropriation-and-value-drift&quot; id=&quot;markdown-toc-expropriation-and-value-drift&quot;&gt;Expropriation and value drift&lt;/a&gt;        &lt;ul&gt;
          &lt;li&gt;&lt;a href=&quot;#expropriation&quot; id=&quot;markdown-toc-expropriation&quot;&gt;Expropriation&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#value-drift&quot; id=&quot;markdown-toc-value-drift&quot;&gt;Value drift&lt;/a&gt;&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#changes-in-opportunities&quot; id=&quot;markdown-toc-changes-in-opportunities&quot;&gt;Changes in opportunities&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#combined-estimate&quot; id=&quot;markdown-toc-combined-estimate&quot;&gt;Combined estimate&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#breaking-down-the-long-run-discount-rate&quot; id=&quot;markdown-toc-breaking-down-the-long-run-discount-rate&quot;&gt;Breaking down the long-run discount rate&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#economic-nullification-1&quot; id=&quot;markdown-toc-economic-nullification-1&quot;&gt;Economic nullification&lt;/a&gt;        &lt;ul&gt;
          &lt;li&gt;&lt;a href=&quot;#extinction-1&quot; id=&quot;markdown-toc-extinction-1&quot;&gt;Extinction&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#superintelligent-ai-1&quot; id=&quot;markdown-toc-superintelligent-ai-1&quot;&gt;Superintelligent AI&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#economic-collapse-1&quot; id=&quot;markdown-toc-economic-collapse-1&quot;&gt;Economic collapse&lt;/a&gt;&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#expropriation-and-value-drift-1&quot; id=&quot;markdown-toc-expropriation-and-value-drift-1&quot;&gt;Expropriation and value drift&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#changes-in-opportunities-1&quot; id=&quot;markdown-toc-changes-in-opportunities-1&quot;&gt;Changes in opportunities&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#combined-estimate-1&quot; id=&quot;markdown-toc-combined-estimate-1&quot;&gt;Combined estimate&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#can-we-change-the-discount-rate&quot; id=&quot;markdown-toc-can-we-change-the-discount-rate&quot;&gt;Can we change the discount rate?&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#the-significance-of-reducing-value-drift&quot; id=&quot;markdown-toc-the-significance-of-reducing-value-drift&quot;&gt;The significance of reducing value drift&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#reducing-risk-by-creating-multiple-funds&quot; id=&quot;markdown-toc-reducing-risk-by-creating-multiple-funds&quot;&gt;Reducing risk by creating multiple funds&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#what-about-individual-value-drift&quot; id=&quot;markdown-toc-what-about-individual-value-drift&quot;&gt;What about individual value drift?&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#significance-of-mis-estimating-the-discount-rate&quot; id=&quot;markdown-toc-significance-of-mis-estimating-the-discount-rate&quot;&gt;Significance of mis-estimating the discount rate&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#extended-ramsey-model-with-estimated-discount-rate&quot; id=&quot;markdown-toc-extended-ramsey-model-with-estimated-discount-rate&quot;&gt;Extended Ramsey model with estimated discount rate&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#a-plan-for-a-slightly-more-realistic-model&quot; id=&quot;markdown-toc-a-plan-for-a-slightly-more-realistic-model&quot;&gt;A plan for a (slightly) more realistic model&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#weitzman-gollier-puzzle&quot; id=&quot;markdown-toc-weitzman-gollier-puzzle&quot;&gt;Weitzman-Gollier puzzle&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#some-arguments-against-prioritizing-improving-the-discount-rate-estimate&quot; id=&quot;markdown-toc-some-arguments-against-prioritizing-improving-the-discount-rate-estimate&quot;&gt;Some arguments against prioritizing improving the discount rate estimate&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#applying-the-importancetractabilityneglectedness-framework&quot; id=&quot;markdown-toc-applying-the-importancetractabilityneglectedness-framework&quot;&gt;Applying the importance/tractability/neglectedness framework&lt;/a&gt;        &lt;ul&gt;
          &lt;li&gt;&lt;a href=&quot;#importance&quot; id=&quot;markdown-toc-importance&quot;&gt;Importance&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#tractability&quot; id=&quot;markdown-toc-tractability&quot;&gt;Tractability&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#neglectedness&quot; id=&quot;markdown-toc-neglectedness&quot;&gt;Neglectedness&lt;/a&gt;&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#conclusion&quot; id=&quot;markdown-toc-conclusion&quot;&gt;Conclusion&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#appendix-proof-that-spending-should-decrease-as-the-discount-rate-decreases&quot; id=&quot;markdown-toc-appendix-proof-that-spending-should-decrease-as-the-discount-rate-decreases&quot;&gt;Appendix: Proof that spending should decrease as the discount rate decreases&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#notes&quot; id=&quot;markdown-toc-notes&quot;&gt;Notes&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h1 id=&quot;introduction&quot;&gt;Introduction&lt;/h1&gt;

&lt;p&gt;Effective altruists can become more effective by carefully considering how they should spread their altruistic consumption over time. This subject receives some attention in the EA community, but a lot of low-hanging fruit still exists, and EAs could probably do substantially more good by further optimizing their consumption schedules. (For our purposes, “consumption” refers to money spent trying to improve the world.)&lt;/p&gt;

&lt;p&gt;So, how should altruists use their resources over time? In 1928, Frank Ramsey developed what is now known as the &lt;a href=&quot;https://plato.stanford.edu/entries/ramsey-economics/&quot;&gt;Ramsey model&lt;/a&gt;. In this model, a philanthropic actor has some stock of invested capital that earns interest over time. They want to know how to maximize utility by spending this capital over time. The key question is, at what rate should they spend to maximize utility?&lt;/p&gt;

&lt;p&gt;(Further suppose this philanthropic actor is the sole funder of a cause. If other actors also fund the cause, that substantially changes the considerations, because the actor must account for how the other funders spend their money&lt;sup id=&quot;fnref:25&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:25&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt;. For the purposes of this essay, I will assume the cause we care about only has one funder, or that all funders can coordinate.)&lt;/p&gt;

&lt;p&gt;Specifically, we assume the actor’s capital grows according to a constant (risk-free) interest rate \(r\). Additionally, we discount future utility at some rate \(\delta\), so that if performing some action now would produce 1 utility, performing it a year from now will only give us \(e^{-\delta} \approx 1 - \delta\) discounted utility. The actor then needs to decide at what rate to consume their capital.&lt;/p&gt;

&lt;p&gt;Total utility equals the sum (more precisely, the integral) of discounted utilities across all moments in time. In mathematical terms, we write it as&lt;/p&gt;

&lt;p&gt;\begin{align}
U = \int_0^\infty e^{-\delta t} u(c(t)) dt
\end{align}&lt;/p&gt;

&lt;p&gt;where \(c(t)\) gives the amount of resources to be consumed (that is, spent on altruistic endeavors) at time \(t\), and \(u(c)\) gives the utility of consumption.&lt;/p&gt;
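
&lt;p&gt;As a concreteness check, this objective can be discretized and evaluated numerically for a policy that spends a constant fraction of wealth each period. The sketch below is mine, not part of the Ramsey literature, and all parameter values (a 5% interest rate, a 1% discount rate, a 500-year horizon) are illustrative assumptions:&lt;/p&gt;

```python
import math

# A minimal sketch: discretize U = integral of e^(-delta*t) * u(c(t)) dt
# for a policy that spends a constant fraction of wealth. Parameter values
# are illustrative assumptions, not estimates.
def total_utility(spend_rate, r=0.05, delta=0.01, years=500, steps_per_year=10):
    dt = 1.0 / steps_per_year
    wealth = 1.0
    utility = 0.0
    for i in range(years * steps_per_year):
        t = i * dt
        consumption = spend_rate * wealth                 # consumption flow c(t)
        utility += math.exp(-delta * t) * math.log(consumption) * dt  # u(c) = ln c
        wealth *= math.exp((r - spend_rate) * dt)         # unspent capital compounds
    return utility

# With log utility, the utility-maximizing constant spend rate equals delta,
# and a quick comparison bears this out: spending 1% per year beats both
# a lower rate (0.5%) and a higher rate (2%).
```

&lt;p&gt;(With log utility and \(W_0 = 1\), one can also verify analytically that \(U(s) = \frac{\ln s}{\delta} + \frac{r - s}{\delta^2}\) for a constant spend rate \(s\), which is maximized at \(s = \delta\).)&lt;/p&gt;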

&lt;p&gt;This model makes many simplifications—see &lt;a href=&quot;http://piketty.pse.ens.fr/files/Ramsey1928.pdf&quot;&gt;Ramsey (1928)&lt;/a&gt;&lt;sup id=&quot;fnref:4&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:4&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt; and &lt;a href=&quot;http://users.ox.ac.uk/~mert2255/papers/discounting.pdf&quot;&gt;Greaves (2017)&lt;/a&gt;&lt;sup id=&quot;fnref:8&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:8&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt; for detailed accounts of the required assumptions, both empirical and philosophical. To keep this essay relatively simple, I will take the Ramsey model as given, but it should be noted that changing these assumptions could change the results.&lt;/p&gt;

&lt;p&gt;It is common to assume that actors have &lt;a href=&quot;https://en.wikipedia.org/wiki/Isoelastic_utility&quot;&gt;constant relative risk aversion&lt;/a&gt; (CRRA), which means their level of risk aversion doesn’t change based on how much money they have. Someone with logarithmic utility of consumption has CRRA, as does anyone whose utility function looks like \(u(c) = \frac{c^{1 - \eta}}{1 - \eta}\) for some constant \(\eta \neq 1\).&lt;/p&gt;

&lt;p&gt;An actor with CRRA maximizes utility by following this consumption schedule&lt;sup id=&quot;fnref:4:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:4&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt;:&lt;/p&gt;

&lt;p&gt;\begin{align}
c(t) = \frac{r \eta - r + \delta}{\eta} e^{\frac{r - \delta}{\eta} t}
\end{align}&lt;/p&gt;

&lt;p&gt;where \(r\) is the interest rate and \(\eta\) is elasticity of marginal utility. Higher \(\eta\) indicates greater risk aversion. \(\eta = 1\) corresponds to logarithmic utility.&lt;/p&gt;

&lt;p&gt;(Original result is due to &lt;a href=&quot;http://piketty.pse.ens.fr/files/Ramsey1928.pdf&quot;&gt;Ramsey (1928)&lt;/a&gt;, but credit to Philip Trammell&lt;sup id=&quot;fnref:5&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:5&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;5&lt;/a&gt;&lt;/sup&gt; for this specific formulation.)&lt;/p&gt;

&lt;p&gt;The scale factor \(\frac{r \eta - r + \delta}{\eta}\) tells us what proportion of the portfolio to spend during each period in order to maximize utility. A higher discount rate \(\delta\) means we should spend more now, while a lower discount rate tells us to save more for later. Intuitively, if we discount the future more heavily, that means we care relatively less about future spending, so we should spend more now (and vice versa).&lt;/p&gt;
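
&lt;p&gt;To make the schedule concrete, here is a minimal sketch of the formula above. The parameter values in the comments are illustrative assumptions, not recommendations:&lt;/p&gt;

```python
import math

# A sketch of the CRRA-optimal schedule from the formula above:
#   c(t) = ((r*eta - r + delta) / eta) * exp(((r - delta) / eta) * t)
def optimal_spend_fraction(r, delta, eta):
    """The scale factor: the proportion of the portfolio to spend per period."""
    return (r * eta - r + delta) / eta

def optimal_consumption(t, r, delta, eta):
    """Optimal consumption flow at time t, per unit of initial capital."""
    growth = (r - delta) / eta
    return optimal_spend_fraction(r, delta, eta) * math.exp(growth * t)

# With logarithmic utility (eta = 1), the spend fraction reduces to delta
# itself: e.g., a 0.5% discount rate implies spending 0.5% of the portfolio
# per year, regardless of the interest rate.
```

&lt;p&gt;Raising \(\delta\) while holding the other parameters fixed raises the spend fraction, matching the intuition above that heavier discounting calls for spending more now.&lt;/p&gt;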

&lt;p&gt;According to the Ramsey model, following a different consumption schedule than the above results in sub-maximal utility. If we spend too much early on, we prevent our assets from growing as quickly as they should. And if we spend too little, we don’t reap sufficient benefits from our assets. Therefore, we would like to know the value of \(\delta\) so we know how to optimally spread our spending over time. (The parameters \(r\) and \(\eta\) matter as well, but in this essay, I will focus on \(\delta\).)&lt;/p&gt;

&lt;p&gt;If we have a pure time preference, that means we discount future utility because we consider the future less morally valuable, and not because of any empirical facts. Ramsey called a pure time preference “ethically indefensible.” But even if we do not admit any pure time preference, we may still discount the value of future resources for four core reasons:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;All resources become useless (I will refer to this as “economic nullification”).&lt;/li&gt;
  &lt;li&gt;We lose access to our own resources.&lt;/li&gt;
  &lt;li&gt;We continue to have access to our own resources, but do not use them in a way that our present selves would approve of.&lt;/li&gt;
  &lt;li&gt;The best interventions might become less cost-effective over time as they get more heavily funded, or might become more cost-effective as we learn more about how to do good.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;(“Resources” can include money, stocks, gold, or any other valuable and spendable asset. I will mostly treat resources as equivalent to money.)&lt;/p&gt;

&lt;p&gt;In the next section, I explain why we might care about the long-run discount rate in addition to the current discount rate. In &lt;a href=&quot;#breaking-down-the-current-discount-rate&quot;&gt;“Breaking down the current discount rate”&lt;/a&gt;, I consider the current discount rate in terms of the above four core reasons and roughly estimate how much we might discount based on each reason. In &lt;a href=&quot;#breaking-down-the-long-run-discount-rate&quot;&gt;“Breaking down the long-run discount rate”&lt;/a&gt;, I do the same for the discount rate into the distant future. In &lt;a href=&quot;#can-we-change-the-discount-rate&quot;&gt;“Can we change the discount rate?”&lt;/a&gt;, I briefly investigate the value of reducing the discount rate as an effective altruistic activity. Similarly, in &lt;a href=&quot;#significance-of-mis-estimating-the-discount-rate&quot;&gt;“Significance of mis-estimating the discount rate”&lt;/a&gt;, I find that simply improving our estimate of the discount rate could possibly be a top effective altruist cause. Finally, the conclusion provides some takeaways and suggests promising areas for future research.&lt;/p&gt;

&lt;p&gt;In this essay, I deal with some complicated subjects that deserve a much more detailed treatment. I provide answers to questions whenever possible, but these answers should be interpreted as extremely preliminary guesses, not confident claims. The primary purpose of this essay is merely to provide a starting point for discussion and raise some important and neglected research questions.&lt;/p&gt;

&lt;p&gt;This essay addresses the philanthropic discount rate, referring specifically to the discount rate that effective altruists should use. This relates to the economic concept of the &lt;a href=&quot;http://www.lse.ac.uk/GranthamInstitute/faqs/what-are-social-discount-rates/&quot;&gt;social discount rate&lt;/a&gt;, which (to simplify) is the rate at which governments should discount the value of future spending. Effective altruists tend to have substantially different values and beliefs than governments, resulting in substantially different discount rates. But if we know the social discount rate, we can use it to “reverse-engineer” the philanthropic discount rate by subtracting out any factors governments use that we do not believe philanthropists should care about, and then adding in any factors governments tend to neglect (e.g., perhaps we believe most people underestimate the probability of extinction). For now, I will not attempt this approach, but this would make a good subject for future research. For a more detailed survey of the social discount rate and the considerations surrounding it, see &lt;a href=&quot;http://users.ox.ac.uk/~mert2255/papers/discounting.pdf&quot;&gt;Greaves (2017)&lt;/a&gt;&lt;sup id=&quot;fnref:8:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:8&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;When attempting to make predictions, I will frequently refer to &lt;a href=&quot;https://www.metaculus.com/questions/4349/will-the-harvard-endowment-be-larger-in-2119-than-in-2019/&quot;&gt;Metaculus&lt;/a&gt; questions. Metaculus is a website that &lt;a href=&quot;https://www.metaculus.com/help/faq/#aim&quot;&gt;“poses questions about the occurrence of a variety of future events, on many timescales, to a community of participating predictors”&lt;/a&gt; with the aim of helping humanity make better predictions. It has a reasonably impressive &lt;a href=&quot;https://www.metaculus.com/questions/track-record/&quot;&gt;track record&lt;/a&gt;. Although Metaculus’ short-term track record might not &lt;a href=&quot;https://www.openphilanthropy.org/blog/how-feasible-long-range-forecasting&quot;&gt;extrapolate well&lt;/a&gt; to the long-term questions referenced in this essay, the aggregated predictions made by Metaculus are probably more reliable than uninformed guesses&lt;sup id=&quot;fnref:15&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:15&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;6&lt;/a&gt;&lt;/sup&gt;. Metaculus predictions can change over time as more users make predictions, so the numbers I quote in this essay might not reflect the most up-to-date information. In order to avoid double-counting my personal opinion, I have not registered my own predictions on any of the linked Metaculus questions.&lt;/p&gt;

&lt;p&gt;Sjir Hoeijmakers, senior researcher at &lt;a href=&quot;https://founderspledge.com/&quot;&gt;Founders Pledge&lt;/a&gt;, &lt;a href=&quot;https://forum.effectivealtruism.org/posts/CfLoq8nJBzRARohtQ/the-case-for-investing-to-give-later&quot;&gt;has written a similar essay&lt;/a&gt; about how we should discount the future. I read his post before publishing this, but I wrote this essay before I knew he was working on the same topic, so any overlap in content is coincidental.&lt;/p&gt;

&lt;h2 id=&quot;significance-of-a-declining-long-run-discount-rate&quot;&gt;Significance of a declining long-run discount rate&lt;/h2&gt;

&lt;p&gt;The basic Ramsey model assumes a fixed discount rate. But it seems plausible that the discount rate declines over time. How does that affect how we should allocate our spending across time?&lt;/p&gt;

&lt;p&gt;In short, we should spend more when the discount rate is high, and decrease our rate of spending as the discount rate falls. See &lt;a href=&quot;#appendix-proof-that-spending-should-decrease-as-the-discount-rate-decreases&quot;&gt;Appendix&lt;/a&gt; for proof.&lt;/p&gt;

&lt;p&gt;The pace of this decline in spending heavily depends on model assumptions. If we use \(\delta(t) = \frac{\alpha}{t} + \delta_f\) (as in the &lt;a href=&quot;#appendix-proof-that-spending-should-decrease-as-the-discount-rate-decreases&quot;&gt;Appendix&lt;/a&gt;), the optimal consumption rate does not have a closed-form solution, but we can verify numerically that with reasonable parameters, the optimal rate at time \(t = 0\) only slightly exceeds the optimal long-run rate (e.g., 0.11% vs. 0.10% when \(\delta(0) = 1\%\) and \(\delta(\infty) = 0.1\%\)). But if we use a discrete state-based model (as in Trammell&lt;sup id=&quot;fnref:5:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:5&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;5&lt;/a&gt;&lt;/sup&gt;, section 3), under &lt;a href=&quot;https://philiptrammell.com/dpptool/h1_1__dr1_0.01__ir1_0.07__h2_1__dr2_0.001__ir2_0.07__dm_1__tr1-2_0.000001__tr2-1_0.001/&quot;&gt;some reasonable parameters&lt;/a&gt;, the current consumption rate equals the current discount rate.&lt;/p&gt;

&lt;p&gt;Given these reasonable but conflicting models, it is unclear how much we should consume today as a function of the current and long-run discount rates. More investigation is required, but until then, it makes sense to attempt to estimate both the current and long-run discount rates.&lt;/p&gt;

&lt;p&gt;Additionally, some arguments suggest that &lt;a href=&quot;https://forum.effectivealtruism.org/posts/XXLf6FmWujkxna3E6/are-we-living-at-the-most-influential-time-in-history-1&quot;&gt;we do not live at a particularly influential time&lt;/a&gt;. If true, that means most estimates of the current discount rate are way too high, the current rate probably resembles the long-run rate, and the long-run rate should be used in calculating optimal consumption.&lt;/p&gt;

&lt;h1 id=&quot;breaking-down-the-current-discount-rate&quot;&gt;Breaking down the current discount rate&lt;/h1&gt;

&lt;p&gt;In this part, I examine some plausible reasons why each of the four types of events (economic nullification, expropriation, value drift, change in opportunities) could occur, and roughly reason about how they should factor into the discount rate.&lt;/p&gt;

&lt;h2 id=&quot;economic-nullification&quot;&gt;Economic nullification&lt;/h2&gt;

&lt;p&gt;An economic nullification event is one in which all our resources become worthless. Let’s break this down into three categories: extinction, superintelligent AI, and economic collapse. Other types of events might result in economic nullification, but these three seem the most significant.&lt;/p&gt;

&lt;h3 id=&quot;extinction&quot;&gt;Extinction&lt;/h3&gt;

&lt;p&gt;Even if we do not prioritize extinction risk reduction as a top cause area&lt;sup id=&quot;fnref:24&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:24&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;7&lt;/a&gt;&lt;/sup&gt;, we should factor the probability of extinction into the discount rate. In possible futures where civilization goes extinct, we have no way of creating value.&lt;/p&gt;

&lt;p&gt;We only have very rough estimates of the probability of extinction. I will cite three sources that appear to give among the best-quality estimates we have right now.&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Pamlin and Armstrong (2015), &lt;a href=&quot;https://www.researchgate.net/publication/291086909_12_Risks_that_threaten_human_civilisation_The_case_for_a_new_risk_category/link/569e051008ae16fdf07b0431/download&quot;&gt;12 Risks That Threaten Human Civilization&lt;/a&gt; estimated a 0.13% probability of extinction in the next century from all causes excluding AI, and a 0-10% chance of extinction due to AI&lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;8&lt;/a&gt;&lt;/sup&gt;.&lt;/li&gt;
  &lt;li&gt;Sandberg and Bostrom (2008)’s &lt;a href=&quot;http://global-catastrophic-risks.com/docs/2008-1.pdf&quot;&gt;Global Catastrophic Risks Survey&lt;/a&gt; estimated a 19% probability of extinction before 2100, based on a survey of participants at the Global Catastrophic Risks Conference.&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://docs.google.com/spreadsheets/d/1W10B6NJjicD8O0STPiT3tNV3oFnT8YsfjmtYR8RO_RI/edit#gid=0&quot;&gt;“Database of existential risk estimates (or similar)”&lt;/a&gt;, a Google Doc compiled by Michael Aird, includes a list of predictions on the probability of extinction. As of 2020-06-19, these predictions (excluding the two I already cited) give a median annual probability of 0.13% and a mean of 0.20% (see &lt;a href=&quot;https://docs.google.com/spreadsheets/d/1zP2gxZZXkmF6SlGKxvqqjwViy3mcK-_YzqCM6A8pDGI/edit?usp=sharing&quot;&gt;my copy of the sheet&lt;/a&gt; for calculations)&lt;sup id=&quot;fnref:18&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:18&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;9&lt;/a&gt;&lt;/sup&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;These estimates translate into an annual extinction probability of 0.0013% to 0.26%, depending on which numbers we use.&lt;/p&gt;
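
&lt;p&gt;These translations treat the risk as a constant annual probability that compounds to the estimated total over the horizon. A minimal sketch of the conversion (the 100-year horizon is the assumption used here):&lt;/p&gt;

```python
# Convert a probability estimated over a multi-year horizon into the
# constant annual probability that compounds to it:
# solve (1 - annual)^years = 1 - total for annual.
def annual_probability(total_probability, years):
    return 1.0 - (1.0 - total_probability) ** (1.0 / years)

# For example, a 0.13% probability of extinction over the next century
# corresponds to roughly a 0.0013% annual probability.
```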

&lt;p&gt;For more, see &lt;a href=&quot;http://eprints.lse.ac.uk/89506/1/Beard_Existential-Risk-Assessments_Accepted.pdf&quot;&gt;Rowe and Beard (2018), “Probabilities, methodologies and the evidence base in existential risk assessments.”&lt;/a&gt;, particularly the appendix, which provides a list of estimates of the probability of extinction or related events&lt;sup id=&quot;fnref:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;10&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;Michael Aird (2020), &lt;a href=&quot;https://forum.effectivealtruism.org/posts/JQQAQrunyGGhzE23a/database-of-existential-risk-estimates&quot;&gt;“Database of existential risk estimates”&lt;/a&gt; (an EA Forum post accompanying the above-linked spreadsheet), addresses the fact that we only have extremely rough estimates of the extinction probability. He reviews some of the implications of this fact, and ultimately concludes that attempting to construct such estimates is still worthwhile. I think he explains the relevant issues pretty well, so I won’t address this problem other than to say that I basically endorse Aird’s analysis.&lt;/p&gt;

&lt;h3 id=&quot;superintelligent-ai&quot;&gt;Superintelligent AI&lt;/h3&gt;

&lt;p&gt;If we develop a superintelligent AI system, this could &lt;a href=&quot;https://www.openphilanthropy.org/blog/potential-risks-advanced-artificial-intelligence-philanthropic-opportunity&quot;&gt;result in extinction&lt;/a&gt;. Alternatively, it could result in such a fantastically positive outcome that any money or resources we have now become useless. Even though a “friendly” AI does not constitute an existential threat, it could still put us in a situation where everyone’s money loses its value, so we should include this possibility in the discount rate.&lt;/p&gt;

&lt;p&gt;AI Impacts &lt;a href=&quot;https://aiimpacts.org/ai-timeline-surveys/&quot;&gt;reviewed AI timeline surveys&lt;/a&gt;, in which AI experts estimated their probabilities of seeing human-level AI by a certain date. We can use these survey results to calculate the implied probability of artificial general intelligence P(AGI)&lt;sup id=&quot;fnref:6&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:6&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;11&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;Let’s take the 2013 FHI survey as an example. This survey gives a median estimate of a 10% chance of AGI by 2020 and a 50% chance by 2050. A 10% chance between 2013 and 2020 suggests an annual probability of 1.37%, and a 50% chance between 2013 and 2050 implies a 1.11% annual probability.&lt;/p&gt;

&lt;p&gt;The 10% and 50% estimates given by each of the surveys reviewed by AI Impacts imply annual probabilities ranging from a minimum of 0.56% to a maximum of 1.78%, with a mean of 1.13% and a standard deviation of 0.32 percentage points.&lt;/p&gt;

&lt;p&gt;Three relatively recent surveys asked participants for predictions rather than probabilities, and these imply P(AGI) ranging from 0.51% to 1.78%.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://www.metaculus.com/questions/3479/when-will-the-first-artificial-general-intelligence-system-be-devised-tested-and-publicly-known-of/&quot;&gt;Metaculus predicts that AGI has a 50% chance of emerging by 2043&lt;/a&gt; (with 168 predictions), implying a 2.97% annual probability of AGI.&lt;/p&gt;
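
&lt;p&gt;That figure follows from the same constant-annual-probability conversion used above, measured from the approximate date of the forecast. Taking 2020 as the start year is my assumption about when the forecast was made:&lt;/p&gt;

```python
# Implied constant annual probability from a "50% chance by 2043" forecast,
# counting from an assumed start year of 2020.
def implied_annual_probability(p_by_end, start_year, end_year):
    years = end_year - start_year
    return 1.0 - (1.0 - p_by_end) ** (1.0 / years)

# 50% by 2043, measured from 2020 (23 years), implies about 2.97% per year.
```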

&lt;p&gt;A superintelligent AI could lead to an extremely bad outcome (extinction) or an extremely good one (post-scarcity), or it could land us somewhere in the middle, where we can still use our resources to improve the world, and therefore money has value. Or the AI might be able to use our accumulated resources to continue producing value—in fact, this seems likely. So we should only treat the probability of AGI as a discount insofar as we expect it to result in extinction or post-scarcity.&lt;/p&gt;

&lt;p&gt;What is the probability of an extreme outcome (good or bad)? Again, we do not have any good estimates of this. As an upper bound, we can simply assume a 100% chance that a superintelligent AI results in an extreme outcome. Combining this with the AI Impacts survey review gives an estimated 1.78% annual probability of an extreme outcome due to AI, equating to a 1.78% discount factor.&lt;/p&gt;

&lt;p&gt;As a lower bound, assume only extinction can result in extreme outcomes, and that the extreme upside (post-scarcity) cannot happen. Taking the upper end of the extinction risk estimate from Pamlin and Armstrong (2015) gives a 0.1% annual probability of extinction, and thus a 0.1% annual probability of an extreme outcome due to AI. So based on these estimates, our discount factor due to AI falls somewhere between 0.1% and 2.97% (or possibly lower), and this may largely or entirely overlap with the discount factor due to extinction.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://www.metaculus.com/questions/4118/will-there-be-a-positive-transition-to-a-world-with-radically-smarter-than-human-artificial-intelligence/&quot;&gt;Metaculus gives a 57% probability&lt;/a&gt; (with 77 predictions) that an AGI will lead to a “positive transition.” &lt;a href=&quot;https://link.springer.com/chapter/10.1007%2F978-3-319-26485-1_33&quot;&gt;Müller &amp;amp; Bostrom (2016)&lt;/a&gt;&lt;sup id=&quot;fnref:17&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:17&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;12&lt;/a&gt;&lt;/sup&gt; surveyed AI experts and came up with a 78% probability on a similar resolution. This gives us some idea of to what extent the discount due to AGI overlaps with the discount due to extinction.&lt;/p&gt;

&lt;p&gt;We could spend time examining plausible AI scenarios and how these impact the discount rate, but I will move on for now. For more on predictions of AI timelines (and the problems thereof), see Muehlhauser (2015), &lt;a href=&quot;https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-timelines&quot;&gt;What Do We Know about AI Timelines?&lt;/a&gt;&lt;/p&gt;

&lt;h3 id=&quot;economic-collapse&quot;&gt;Economic collapse&lt;/h3&gt;

&lt;p&gt;Money could become useless if the global economy experiences a catastrophic collapse, even if civilization ultimately recovers.&lt;/p&gt;

&lt;p&gt;Depending on the nature of the event, it may be possible to guard against an economic collapse. For example, hyperinflation destroys the value of cash and bonds, but might leave stocks, gold, and real estate relatively unaffected, so investors in these assets could still preserve (some of) their wealth.&lt;/p&gt;

&lt;p&gt;We have seen some countries experience severe economic turmoil, such as Germany after WWI and &lt;a href=&quot;https://en.wikipedia.org/wiki/Hyperinflation_in_Zimbabwe&quot;&gt;Zimbabwe in 2008&lt;/a&gt;, but these would not have resulted in complete loss of capital for a highly diversified investor (i.e., one who holds some gold or other real assets).&lt;/p&gt;

&lt;p&gt;Almost any severe economic collapse would merely result in a &lt;em&gt;near&lt;/em&gt; loss of all resources and not a &lt;em&gt;complete&lt;/em&gt; loss. We should only discount future worlds where we see a complete loss, because any partial loss of capital can get rolled into the interest rate.&lt;/p&gt;

&lt;p&gt;Pamlin and Armstrong (2015) include catastrophic economic collapse as one of their 12 risks that threaten civilization, but do not provide a probability estimate.&lt;/p&gt;

&lt;h2 id=&quot;expropriation-and-value-drift&quot;&gt;Expropriation and value drift&lt;/h2&gt;

&lt;p&gt;Obviously, expropriation and value drift are not the same thing. But over longer time periods, it is not always clear whether an old institution ceased to exist due to outside forces or because its leaders lost focus.&lt;/p&gt;

&lt;p&gt;I am not aware of any detailed investigations on the rate of institutional failure. Philip Trammell &lt;a href=&quot;https://80000hours.org/podcast/episodes/phil-trammell-patient-philanthropy/#articles-books-and-blog-posts-discussed-in-the-show&quot;&gt;stated on the 80,000 Hours Podcast&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;I did a cursory look at what seemed to me like the more relevant foundations and institutions that were set up over the past thousand years or something. […] I came up with a very tentative value drift/expropriation rate of half a percent per year for ones that were explicitly aiming to last a long time with a relatively well defined set of values.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;According to Sandberg (n.d.)&lt;sup id=&quot;fnref:26&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:26&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;13&lt;/a&gt;&lt;/sup&gt;, nations have a 0.5% annual probability of ceasing to exist. Most institutions don’t last as long as nations, but an institution that’s designed to be long-lasting might outlast its sovereign country. So perhaps we could infer an institutional failure rate of somewhere around 0.5%.&lt;/p&gt;

&lt;h3 id=&quot;expropriation&quot;&gt;Expropriation&lt;/h3&gt;

&lt;p&gt;According to Dimson, Marsh, and Staunton’s &lt;a href=&quot;https://www.credit-suisse.com/media/assets/corporate/docs/about-us/media/media-release/2018/02/giry-summary-2018.pdf&quot;&gt;Global Investment Returns Yearbook 2018&lt;/a&gt; (henceforth “DMS”), from 1900 to 2018, only two major countries (out of 23) experienced a nationwide expropriation of government assets: Russia and China (in both cases because of a communist revolution). This gives a historical annual 0.05% probability of expropriation when countries are weighted by market capitalization (0.07% when countries are equal-weighted).&lt;/p&gt;
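
&lt;p&gt;The equal-weighted figure can be reconstructed as a simple frequency count. This is a sketch under the assumption that each country contributes one observation per year over the full sample:&lt;/p&gt;

```python
# Equal-weighted historical expropriation frequency from the DMS sample:
# 2 nationwide expropriations (Russia and China) among 23 countries over
# the 118 years from 1900 to 2018.
events = 2
countries = 23
years = 118
equal_weighted_annual = events / (countries * years)
# roughly 0.00074 per country-year, i.e. the 0.07% equal-weighted figure
```

&lt;p&gt;(The 0.05% cap-weighted figure cannot be reconstructed this way, since it depends on each country’s share of world market capitalization.)&lt;/p&gt;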

&lt;p&gt;Both expropriation events occurred in unstable countries that DMS classify as having been “emerging” at the time (defined as having a GDP per capita under $25,000, adjusted for inflation). Thus, it seems investors have some ability to predict in advance whether their country has a particularly high risk of expropriation. We can probably assume that developed countries such as the United States have an expropriation risk of less than 0.05% because no developed-country expropriations occurred in DMS’s sample.&lt;/p&gt;

&lt;p&gt;Note that some other countries (such as Cuba) did expropriate citizens’ funds, but are not included in DMS. DMS’s sample covers 98% of world market cap, so the remaining countries matter little on a cap-weighted basis. Furthermore, if investors can predict in advance that they live in a high-risk country, this holds doubly so for &lt;a href=&quot;https://en.wikipedia.org/wiki/Frontier_markets&quot;&gt;frontier markets&lt;/a&gt; like Cuba.&lt;/p&gt;

&lt;p&gt;So it seems the risk of nationwide expropriation in developed countries is so small that it’s a rounding error compared to other factors like value drift.&lt;/p&gt;

&lt;p&gt;What about the risk that your personal assets are expropriated? If governments only expropriate assets from certain people or institutions, the risk to any particular individual is relatively small, simply because that individual will probably not be among the targeted group. But as these sorts of events do not appear in stock market returns, we cannot estimate the risk based on DMS data, and the risk is harder to estimate in general. As individual expropriation happens fairly rarely, I would expect that investors experience greater risk from nationwide expropriation. As a naive approach, we could double the 0.05% figure from before to get a 0.1% all-in annual probability of expropriation, although I suspect this overstates the risk.&lt;/p&gt;

&lt;p&gt;More frequently, governments seize some but not all of citizens’ assets, for example when the United States government &lt;a href=&quot;https://en.wikipedia.org/wiki/Executive_Order_6102&quot;&gt;forced all citizens to sell their gold&lt;/a&gt; at below-market rates. Such events do not existentially threaten one’s financial position, so they should not be considered as part of the expropriation rate for our purposes.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://www.metaculus.com/questions/4328/will-a-total-sum-amounting-to-at-least-10-of-the-money-in-all-us-dafs-be-expropriated-by-2100/&quot;&gt;Metaculus&lt;/a&gt; &lt;a href=&quot;https://www.metaculus.com/questions/4584/what--of-money-held-in-all-us-dafs-will-be-expropriated-by-2100/&quot;&gt;predicts&lt;/a&gt; that &lt;a href=&quot;https://en.wikipedia.org/wiki/Donor-advised_fund&quot;&gt;donor-advised funds (DAFs)&lt;/a&gt; have a somewhat higher probability of expropriation, although this is based on a limited number of predictions, and it only applies to philanthropists who use DAFs.&lt;/p&gt;

&lt;p&gt;Investors can protect against expropriation by domiciling their assets in multiple countries. Probably the safest legal way to do this is to buy foreign real estate, which is the most difficult asset for governments to expropriate. But in general, investors cannot easily shield their assets from expropriation. In &lt;a href=&quot;http://www.efficientfrontier.com/IFA/ifa3.htm&quot;&gt;Deep Risk&lt;/a&gt;, William Bernstein concludes that the benefits of avoiding expropriation probably do not justify the costs for individual investors. The same is probably true for philanthropists.&lt;/p&gt;

&lt;h3 id=&quot;value-drift&quot;&gt;Value drift&lt;/h3&gt;

&lt;p&gt;When discussing value drift, we must distinguish between individuals and institutions. Both types of actors must make decisions about how to use their money over time, but they face substantially different considerations. Most obviously, individuals cannot continue donating money for multiple generations.&lt;/p&gt;

&lt;p&gt;For the purposes of this essay, we care more about the institutional rate of value drift:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Effective altruist institutions have much more money. Indeed, sufficiently wealthy individuals typically create institutions to manage their money.&lt;/li&gt;
  &lt;li&gt;Insofar as individuals have a higher value drift rate, they can mitigate this by giving their money to long-lived institutions. (Although for many individuals, most of their donations will come from future income, and donating future income now poses some challenges, to say the least.)&lt;/li&gt;
  &lt;li&gt;Individual effective altruists typically share values and goals with many other people. A single individual ceasing to donate to a cause almost never existentially threatens the goals of that cause.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That said, I will briefly address individual value drift. We don’t know much about it, but we have some information:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;According to the 2018 EA Survey, &lt;a href=&quot;https://forum.effectivealtruism.org/posts/fZ3Y4iYwt36Pjmwhz/ea-survey-2018-series-do-ea-survey-takers-keep-their-gwwc&quot;&gt;40% of Giving What We Can pledge-signers do not report keeping up with the pledge&lt;/a&gt; (although this is partially due to lack of reporting)&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://forum.effectivealtruism.org/posts/bGcKJiBt4HSSScF76/ea-survey-2018-series-how-long-do-eas-stay-in-ea&quot;&gt;An analysis of the 2014-2018 EA Surveys&lt;/a&gt; suggests about a 60% 4-5 year survival rate.&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://forum.effectivealtruism.org/posts/mZWFEFpyDs3R6hD3r/empirical-data-on-value-drift&quot;&gt;A poll of one individual’s contacts&lt;/a&gt; found a 45% 5-year survival rate.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Each of these sources suggests something like a 10% annual value drift rate. This is much higher than any other rate estimated in this essay. On the bright side, one survey found that wealthier individuals tend to have a lower rate of value drift, which means the dollar-weighted value drift rate might not be quite as bad as 10%.&lt;/p&gt;
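&lt;p&gt;As a sanity check on the “something like 10%” figure, the multi-year survival rates above can be annualized under the simplifying assumption of a constant annual drift rate (the helper name below is mine, not from any of the cited sources):&lt;/p&gt;

```python
# Back-of-the-envelope: convert multi-year survival rates into an
# annualized value drift rate, assuming the rate is constant each year.

def annual_drift_rate(survival_rate, years):
    """Annual drift rate r such that (1 - r)**years == survival_rate."""
    return 1 - survival_rate ** (1 / years)

# ~60% survival over 4-5 years (EA Survey analysis); using 4.5 years
print(annual_drift_rate(0.60, 4.5))  # ~0.107, i.e. ~11% per year

# 45% 5-year survival rate (poll of one individual's contacts)
print(annual_drift_rate(0.45, 5))    # ~0.148, i.e. ~15% per year
```

Both figures land in the neighborhood of the 10% annual rate cited above, with the smaller poll suggesting a somewhat higher rate.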

&lt;p&gt;For long-lived institutions, it’s hard to measure the value drift rate in isolation. We can more easily measure the combined expropriation/value drift rate. As discussed above, some preliminary evidence suggests a rate of about 0.5%. Further investigation could substantially refine this estimate.&lt;/p&gt;

&lt;h2 id=&quot;changes-in-opportunities&quot;&gt;Changes in opportunities&lt;/h2&gt;

&lt;p&gt;I’ve saved the best for last, because changes in opportunities appear to be the most important factor in the discount rate.&lt;/p&gt;

&lt;p&gt;First, I should note that it doesn’t really make sense to model the rate of changes in opportunities as part of the discount rate. Future &lt;em&gt;utility&lt;/em&gt; doesn’t become less valuable due to changes in opportunities; rather, &lt;em&gt;money&lt;/em&gt; becomes less (or more) effective at producing utility. It might make more sense to treat changes in opportunities as part of the utility function&lt;sup id=&quot;fnref:20&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:20&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;14&lt;/a&gt;&lt;/sup&gt;, or to create a separate parameter for it. Perhaps we can spend money on research to improve the value of future opportunities, and we could account for this. Unfortunately, that would probably mean we no longer have a closed-form solution for the optimal consumption rate. So for the sake of making the math easier, let’s pretend it makes sense to include changes in opportunities within the discount rate, and assume the rate of change is fixed and we can’t do anything to change it. A future project can relax this assumption and see how it changes results.&lt;/p&gt;

&lt;p&gt;Our top causes could get better over time as we learn more about how to do good, or they could get worse as the best causes become fully funded. We have some reason to believe both of these things are happening. Which effect is stronger?&lt;/p&gt;

&lt;p&gt;Let’s start by looking at GiveWell top charities, where we have a particularly good (although nowhere near perfect) idea of how much good they do.&lt;/p&gt;

&lt;p&gt;This table lists the most cost-effective charity for each year according to GiveWell’s &lt;a href=&quot;https://www.givewell.org/how-we-work/our-criteria/cost-effectiveness/cost-effectiveness-models&quot;&gt;estimates&lt;/a&gt;, in terms of cost per life-saved equivalent (CPLSE). The “real” column adjusts each CPLSE estimate to November 2015 dollars.&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Year&lt;/th&gt;
      &lt;th&gt;Organization&lt;/th&gt;
      &lt;th&gt;CPLSE nominal&lt;/th&gt;
      &lt;th&gt;CPLSE real&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;2012&lt;/td&gt;
      &lt;td&gt;Against Malaria Foundation&lt;/td&gt;
      &lt;td&gt;$2004&lt;/td&gt;
      &lt;td&gt;$2066&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;2013&lt;/td&gt;
      &lt;td&gt;Against Malaria Foundation&lt;/td&gt;
      &lt;td&gt;$3401&lt;/td&gt;
      &lt;td&gt;$3463&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;2014&lt;/td&gt;
      &lt;td&gt;Deworm the World&lt;/td&gt;
      &lt;td&gt;$1625&lt;/td&gt;
      &lt;td&gt;$1633&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;2015&lt;/td&gt;
      &lt;td&gt;Against Malaria Foundation&lt;/td&gt;
      &lt;td&gt;$1783&lt;/td&gt;
      &lt;td&gt;$1783&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;2016&lt;/td&gt;
      &lt;td&gt;Deworm the World&lt;/td&gt;
      &lt;td&gt;$901&lt;/td&gt;
      &lt;td&gt;$886&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;2017&lt;/td&gt;
      &lt;td&gt;Deworm the World&lt;/td&gt;
      &lt;td&gt;$851&lt;/td&gt;
      &lt;td&gt;$819&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;2018&lt;/td&gt;
      &lt;td&gt;Deworm the World&lt;/td&gt;
      &lt;td&gt;$652&lt;/td&gt;
      &lt;td&gt;$592&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;2019&lt;/td&gt;
      &lt;td&gt;Deworm the World&lt;/td&gt;
      &lt;td&gt;$480&lt;/td&gt;
      &lt;td&gt;$443&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;&lt;a href=&quot;https://blog.givewell.org/2011/08/18/why-we-cant-take-expected-value-estimates-literally-even-when-theyre-unbiased/&quot;&gt;We cannot take these expected value estimates literally&lt;/a&gt;, but they might tell us something about the direction of change.&lt;/p&gt;

&lt;p&gt;GiveWell does not provide cost-effectiveness estimate spreadsheets for earlier years, but its earlier estimates tended to be lower, e.g., &lt;a href=&quot;https://www.givewell.org/international/top-charities/villagereach/December-2009-review&quot;&gt;“under $1000 per infant death averted”&lt;/a&gt; for VillageReach in 2009. For a time, GiveWell’s estimates increased due to (according to GiveWell) excessive optimism in the earlier calculations. However, the estimates have been near-monotonically decreasing since 2013 (every year except 2014-2015). &lt;a href=&quot;https://www.metaculus.com/questions/2776/how-much-will-givewell-guess-it-will-cost-to-get-an-outcome-as-good-as-saving-a-life-at-the-end-of-2021/&quot;&gt;Metaculus predicts&lt;/a&gt; (with 117 predictions) the 2021 real cost-effectiveness estimate to lie between the values for 2018 and 2019, suggesting a positive but small change in cost. It &lt;a href=&quot;https://www.metaculus.com/questions/4332/how-much-will-givewell-guess-it-will-cost-to-get-an-outcome-as-good-as-saving-a-life-at-the-end-of-2031/&quot;&gt;predicts&lt;/a&gt; (with 49 predictions) that GiveWell’s 2031 real cost-effectiveness estimate will be $454, nearly the same as 2019, implying that Metaculus expects GiveWell’s estimates to stabilize.&lt;/p&gt;

&lt;p&gt;Has the increased cost-effectiveness come from an improvement in the top charities’ programs, or simply from changes in estimates? I did not examine this in detail, but according to GiveWell’s &lt;a href=&quot;https://www.givewell.org/how-we-work/our-criteria/cost-effectiveness/cost-effectiveness-models/changelog&quot;&gt;2018 changelog&lt;/a&gt;, the &lt;a href=&quot;https://docs.google.com/spreadsheets/d/1jzS693Y-ZAIloQejlzSc3e3t7iPHyor1qt7HBjSVXhQ/edit#gid=509033857&quot;&gt;improvements in Deworm the World&lt;/a&gt; occurred primarily due to a reduction in cost per child dewormed per year. Perhaps we should classify this more as an operational improvement than as learning, but it falls in the same general category.&lt;/p&gt;

&lt;p&gt;What about the value of finding new top charities? According to GiveWell, its current recommended charities are probably more cost-effective than its 2011 top recommendation of VillageReach. Since 2014, GiveWell has not found any charities that it ranks as more cost-effective than Deworm the World, but we should expect some nontrivial probability that it finds one in the future.&lt;/p&gt;

&lt;p&gt;Other cause areas have a much weaker knowledge base than global poverty. Even if top global poverty charities were getting less cost-effective over time due to limited learning, I would still expect us to be able to find interventions in animal welfare or existential risk that work substantially better than our current best ideas. These cause areas probably have a relatively high annual “learning rate”, which we should subtract from the discount rate (possibly resulting in a negative discount).&lt;/p&gt;

&lt;p&gt;Under plausible assumptions, some cause areas could have a learning rate on the order of 10% (translating to a -10% discount), or could have a 10% rate of opportunities disappearing.&lt;/p&gt;

&lt;h2 id=&quot;combined-estimate&quot;&gt;Combined estimate&lt;/h2&gt;

&lt;p&gt;This section summarizes all the estimates given so far. I came up with these based on limited information, and they should not be taken as reliable. But this can give us a starting point for thinking about the discount rate.&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Category&lt;/th&gt;
      &lt;th&gt;Rate&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;extinction&lt;/td&gt;
      &lt;td&gt;0.001% – 0.2%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;superintelligent AI&lt;/td&gt;
      &lt;td&gt;0.001% – 3%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;economic collapse&lt;/td&gt;
      &lt;td&gt;?&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;expropriation&lt;/td&gt;
      &lt;td&gt;0% – 0.05%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;institutional value drift&lt;/td&gt;
      &lt;td&gt;0.5%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;individual value drift&lt;/td&gt;
      &lt;td&gt;10%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;changes in opportunities&lt;/td&gt;
      &lt;td&gt;-10% – 10%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Recall that the estimate for superintelligent AI does not indicate the chance of developing AI, but rather the chance that AI is developed &lt;em&gt;and&lt;/em&gt; money becomes useless as a result.&lt;/p&gt;

&lt;p&gt;Adding these up gives an institutional discount rate of 0.5% – 2.3%, excluding the discount due to changes in opportunities. Introducing this extra discount dramatically widens the confidence interval.&lt;/p&gt;

&lt;p&gt;My current best guess:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Philanthropists who prioritize global poverty experience a slightly positive discount due to changes in opportunities, and probably expect a relatively low probability of extinction, suggesting an all-in discount rate of around 0.5% – 1%.&lt;/li&gt;
  &lt;li&gt;Philanthropists who prioritize more neglected cause areas experience a substantially positive learning rate, and therefore a negative all-in discount rate. This suggests consumption should be postponed until the learning rate substantially diminishes, although in practice, there is no clear line between “consumption” and “doing research to learn more about how to do good.”&lt;/li&gt;
&lt;/ol&gt;

&lt;h1 id=&quot;breaking-down-the-long-run-discount-rate&quot;&gt;Breaking down the long-run discount rate&lt;/h1&gt;

&lt;h2 id=&quot;economic-nullification-1&quot;&gt;Economic nullification&lt;/h2&gt;

&lt;p&gt;Again, let’s consider three possible causes of economic nullification: extinction, superintelligent AI, and economic collapse.&lt;/p&gt;

&lt;h3 id=&quot;extinction-1&quot;&gt;Extinction&lt;/h3&gt;

&lt;p&gt;If we use a moderately high estimate for the current probability of extinction (say, 0.2% per year), it seems implausible that this probability could remain at a similar level for thousands of years. A 0.2% annual extinction probability translates into a 1 in 500 million chance that humanity lasts longer than 10,000 years. Humanity has already survived for about 200,000 years, so on priors, this tiny probability seems extremely suspect.&lt;/p&gt;

&lt;p&gt;Pamlin and Armstrong (2015)’s more modest estimate of 0.0013% translates to a more plausible 88% chance of surviving for 10,000 years, and a 27% chance of making it 100,000 years.&lt;/p&gt;
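&lt;p&gt;The survival-probability arithmetic in the last two paragraphs is a straightforward compounding calculation (the function name below is my own shorthand):&lt;/p&gt;

```python
# A constant annual extinction probability p implies a (1 - p)**n
# chance of surviving n consecutive years.

def survival_probability(annual_p, years):
    return (1 - annual_p) ** years

# 0.2% per year over 10,000 years: roughly 1 in 500 million
print(survival_probability(0.002, 10_000))     # ~2e-9

# 0.0013% per year (Pamlin and Armstrong's estimate)
print(survival_probability(0.000013, 10_000))  # ~0.88
print(survival_probability(0.000013, 100_000)) # ~0.27
```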

&lt;p&gt;One of these three claims must be true:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;The annual probability of extinction is quite low, on the order of 0.001% per year or less.&lt;/li&gt;
  &lt;li&gt;Currently, we have a relatively high probability of extinction, but if we survive through the current crucial period, then this probability will dramatically decrease.&lt;/li&gt;
  &lt;li&gt;The current relatively high probability of extinction will maintain indefinitely. Therefore, humanity is highly likely to go extinct over an “evolutionary” timespan (10,000 to 100,000 years), and all but guaranteed not to survive (something like 1 in a googol chance) over a “geological” time scale (10+ million years).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In &lt;a href=&quot;https://forum.effectivealtruism.org/posts/XXLf6FmWujkxna3E6/are-we-living-at-the-most-influential-time-in-history-1&quot;&gt;“Are we living at the most influential time in history?”&lt;/a&gt; (2018), Will MacAskill offers some justification for (but does not strongly endorse) the first claim on this list. The second claim seems to represent the most common view among long-term-focused effective altruists.&lt;/p&gt;

&lt;p&gt;If we accept the first or second claim, this implies existential risk has nearly zero impact on the long-run discount rate. The third claim allows us to use a nontrivial long-term discount due to existential risk. I find it the least plausible of the three—not because of any particularly good inside-view argument, but because it seems unlikely on priors.&lt;/p&gt;

&lt;h3 id=&quot;superintelligent-ai-1&quot;&gt;Superintelligent AI&lt;/h3&gt;

&lt;p&gt;With AGI, we can construct the same ternary choice that we did with extinction:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;We have a low annual probability of developing AGI.&lt;/li&gt;
  &lt;li&gt;The probability is currently relatively high, but will decrease over time.&lt;/li&gt;
  &lt;li&gt;The probability is high and will remain high in perpetuity.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Again, I find the third option the least plausible. Surely if we have not developed superintelligent AI after 1000 years, there must be some fundamental barrier preventing us from building it. In this case, I find the first option implausible as well. Based on what we know about AI, it seems the probability that we develop it in the near future must be high (for our purposes, a 0.1% annual probability qualifies as high). The Open Philanthropy Project &lt;a href=&quot;https://www.openphilanthropy.org/blog/potential-risks-advanced-artificial-intelligence-philanthropic-opportunity#Importance&quot;&gt;agrees with this view&lt;/a&gt;, claiming “a nontrivial likelihood (at least 10% with moderate robustness, and at least 1% with high robustness) that transformative AI will be developed within the next 20 years.”&lt;/p&gt;

&lt;p&gt;If we accept one of the first two claims, then we should use a low long-run discount rate due to the possibility of developing superintelligent AI.&lt;/p&gt;

&lt;h3 id=&quot;economic-collapse-1&quot;&gt;Economic collapse&lt;/h3&gt;

&lt;p&gt;Unlike in the previous cases, I find it at least somewhat plausible that the probability of catastrophic economic collapse could remain high in perpetuity. Over the past several thousand years, many parts of the world have experienced periods of extreme turmoil where most investors lost all of their assets. Although investors today can more easily diversify globally across many assets, this increased globalization plausibly also increases the probability of a worldwide collapse.&lt;/p&gt;

&lt;p&gt;Unlike extinction, and probably unlike the development of AGI, a global economic collapse could be a repeatable event. If civilization as we know it ends but humanity survives, we could slowly rebuild society and eventually re-establish an interconnected global economy. And if we can establish a global economy for a second time, it can probably also collapse for a second time. Perhaps civilization could experience 10,000-year long “mega cycles” of technological development, globalization, and collapse.&lt;/p&gt;

&lt;p&gt;This is not to say I am &lt;em&gt;confident&lt;/em&gt; that the future will look like this. I merely find it &lt;em&gt;somewhat plausible&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Let’s say we believe with 10% probability that the future will experience a catastrophic economic collapse on average once every 10,000 years. This translates into a 0.001% annual probability of economic collapse. This probably matters more than the long-run probability of extinction or AGI, but is still so small as to not be worth considering for our purposes.&lt;/p&gt;
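&lt;p&gt;The arithmetic here is just an expected-rate calculation (the variable names below are mine):&lt;/p&gt;

```python
# Expected annual collapse probability: 10% credence in a world where
# catastrophic economic collapse happens on average once every 10,000 years.
credence_in_collapse_regime = 0.10
collapses_per_year_if_true = 1 / 10_000

annual_collapse_probability = credence_in_collapse_regime * collapses_per_year_if_true
print(annual_collapse_probability)  # ~1e-05, i.e. about 0.001% per year
```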

&lt;h2 id=&quot;expropriation-and-value-drift-1&quot;&gt;Expropriation and value drift&lt;/h2&gt;

&lt;p&gt;Based on historical evidence, it appears that institutions’ ability to preserve themselves or their values follows something like an exponential distribution: as we look back further in time, we see dramatically fewer institutions from that time that still exist today. Thus, it seems plausible that the rate of value drift could remain substantially greater than zero in the long run.&lt;/p&gt;

&lt;p&gt;Expropriation/value drift might not follow an exponential curve—we know extremely little about this. An exponential distribution seems plausible on priors, but it also seems plausible that the rate could decrease over time as institutions learn more about how to preserve themselves. Similarly, organizations that avoid value drift will tend to gain power over time relative to those that don’t. On this basis, we might expect the value drift rate to decline over time as value-stable institutions gain an increasing share of the global market.&lt;/p&gt;

&lt;h2 id=&quot;changes-in-opportunities-1&quot;&gt;Changes in opportunities&lt;/h2&gt;

&lt;p&gt;In the long run, the learning rate must approach 0. There must be some best action to take, and we can never do better than that best action. Over time, we will gain increasing confidence in our ability to identify that best action. Either we eventually converge on the best action, or we hit some upper limit on how much it’s possible to learn. Either way, the learning rate must approach 0.&lt;/p&gt;

&lt;p&gt;We can also expect giving opportunities to get worse over time as the best opportunities become fully funded. The utility of donations might asymptote toward the utility of general consumption—that is, in the long run, you might not be able to do more good by donating money than you can by spending it on yourself. Or new opportunities might continue to emerge, and might even get better over time. It seems conceivable that they could continue getting better in perpetuity, although I’m not sure how that would work. But in any case, the available opportunities cannot get worse in perpetuity. Money might have less marginal utility in the future as people become better off, but the Ramsey model already accounts for this in the \(\eta\) parameter—for example, \(\eta = 1\) indicates logarithmic utility of money, which means exponentially growing people’s wealth only linearly increases utility.&lt;/p&gt;
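&lt;p&gt;For reference, the way \(\eta\) enters here is through the standard isoelastic utility function of the Ramsey model (this is textbook material, not something specific to this essay):&lt;/p&gt;

\[
u(c) =
\begin{cases}
\dfrac{c^{1-\eta} - 1}{1 - \eta} &amp; \eta \neq 1 \\
\ln c &amp; \eta = 1
\end{cases}
\]

&lt;p&gt;With \(\eta = 1\), utility is logarithmic in money, so multiplying wealth by a constant factor adds only a constant amount of utility, which is the diminishing-returns effect described above.&lt;/p&gt;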

&lt;h2 id=&quot;combined-estimate-1&quot;&gt;Combined estimate&lt;/h2&gt;

&lt;p&gt;In summary:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;The outside view suggests a low long-run extinction rate.&lt;/li&gt;
  &lt;li&gt;It’s hard to say anything of substance about the long-run rate of economic collapse or expropriation/value drift.&lt;/li&gt;
  &lt;li&gt;It seems the rate of changes in opportunities must approach 0.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It seems plausible that value drift is the largest factor in the long run, which perhaps suggests a 0.5% long-run discount rate if we assume 0.5% value drift. But this estimate seems much weaker than the (already-weak) approximation for the current discount rate.&lt;/p&gt;

&lt;h1 id=&quot;can-we-change-the-discount-rate&quot;&gt;Can we change the discount rate?&lt;/h1&gt;

&lt;p&gt;So far, we have assumed we cannot change the discount rate. But the cause of existential risk reduction focuses on reducing the discount rate by decreasing the probability of extinction. Presumably we could also reduce the expropriation and value drift rates if we invested substantial effort into doing so.&lt;/p&gt;

&lt;h2 id=&quot;the-significance-of-reducing-value-drift&quot;&gt;The significance of reducing value drift&lt;/h2&gt;

&lt;p&gt;Effective altruists invest substantial effort in reducing existential risk (although, arguably, society at large does not invest nearly enough). But we know almost nothing about how to reduce value drift. &lt;a href=&quot;https://forum.effectivealtruism.org/posts/jG8pptGksBpzyTxYg/a-qualitative-analysis-of-value-drift-in-ea-1&quot;&gt;Some research has been done&lt;/a&gt; on value drift among individuals in the effective altruism community, but it’s highly preliminary, and I am not aware of any comparable research on institutional value drift.&lt;/p&gt;

&lt;p&gt;Arguably, existential risk matters a lot more than value drift. Even in the absence of any philanthropic intervention, people generally try to make life better for themselves. If humanity does not go extinct, a philanthropist’s values might eventually actualize, depending on their values and on the direction humanity takes.&lt;/p&gt;

&lt;p&gt;Under most (but not all) plausible value systems and beliefs about the future direction of humanity, existential risk looks more important than value drift. The extent to which it looks more important depends on how much better one expects the future world to be (conditional on non-extinction) with philanthropic intervention than with its default trajectory.&lt;/p&gt;

&lt;p&gt;A sampling of some beliefs that could affect how much one cares about value drift:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;If economic growth continues as it has but we do not see any transformative events (such as development of superintelligent AI), global poverty will probably disappear in the next few centuries, if not sooner.&lt;/li&gt;
  &lt;li&gt;Even if humanity eradicates global poverty, we might continue disvaluing non-human animals’ well-being and subjecting them to great unnecessary suffering. Philanthropic efforts in the near term could substantially alter this trajectory.&lt;/li&gt;
  &lt;li&gt;Some people, particularly people interested in AI safety, believe that if we avoid extinction, we will almost certainly develop a friendly AI which will carry all sentient life into paradise. If that’s true, we really only care about preventing extinction, and particularly about ensuring we don’t make an unfriendly AI.&lt;/li&gt;
  &lt;li&gt;It might be critically important to do a certain amount of AI safety research before AGI emerges, and this research might not happen without support from effective altruist donors.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Beliefs #1 and #3 imply relatively less concern about value drift (compared to extinction), while #2 and #4 imply relatively more.&lt;/p&gt;

&lt;p&gt;Note that even if you expect good outcomes to be realized in the long run, you still care about how value drift impacts philanthropists’ ability to do good in the next few decades or centuries.&lt;/p&gt;

&lt;p&gt;I do not think it is obvious that reducing the probability of extinction does more good per dollar than reducing the value drift rate, which naively suggests the effective altruist community should invest relatively more into reducing value drift. But I find it plausible that, upon further analysis, it would become clear that existential risk matters much more.&lt;/p&gt;

&lt;p&gt;Aside: I spent some time constructing an explicit quantitative model of the significance of value drift versus existential risk. I will not reproduce the model here, but it bore out the intuition that the ratio (importance of value drift):(importance of extinction risk) is basically proportional to the ratio (welfare of future worlds by default):(welfare of future worlds with philanthropic intervention), with some consideration given to the probabilities of extinction and value drift.&lt;/p&gt;

&lt;h2 id=&quot;reducing-risk-by-creating-multiple-funds&quot;&gt;Reducing risk by creating multiple funds&lt;/h2&gt;

&lt;p&gt;Unlike self-interested investors, philanthropists don’t just care about how much money they have. They also care about the assets of other value-aligned people. This allows philanthropists to protect against certain risks in ways self-interested investors cannot.&lt;/p&gt;

&lt;p&gt;To mitigate expropriation risk, different value-aligned philanthropists can invest their assets in different countries. To some extent, this already happens automatically: if Alice lives in France and Bob lives in Australia, and they share the same values, they already naturally split their assets between the two countries. If, say, France undergoes a communist revolution and nationalizes all citizens’ assets, Bob still has his portfolio, so Alice and Bob have only lost half the money they care about. If enough value-aligned philanthropists exist across many countries, total expropriation can probably only occur in the case of an &lt;a href=&quot;#economic-nullification&quot;&gt;economic nullification&lt;/a&gt;-like event, such as the formation of a one-world communist government.&lt;/p&gt;

&lt;p&gt;The same applies to value drift. If a set of philanthropic investors share values but one member of the group becomes more selfish over time, only a small portion of the collective altruistic portfolio has been lost. It seems to me that the probability of value drift is mostly independent across individuals, although I can think of some exceptions (e.g., if ties weaken within the effective altruism community, this could increase the overall rate of value drift). Therefore, the probability of total value drift rapidly decreases as the number of philanthropists increases. But there’s still the possibility that the EA community as a whole could experience value drift.&lt;/p&gt;

&lt;p&gt;We should consider the special case where asset ownership is fat tailed—that is, a small number of altruists control almost all the wealth. In practice, wealth does follow a fat-tailed distribution, with the Open Philanthropy Project controlling a majority of (explicitly) effective altruist assets, and large donors constituting a much bigger fraction of the pie than small donors&lt;sup id=&quot;fnref:22&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:22&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;15&lt;/a&gt;&lt;/sup&gt;. Asset concentration substantially increases the damage caused by expropriation or value drift. The larger philanthropists can mitigate this by giving their money to smaller actors, effectively diversifying against value drift/expropriation risk. Although gifts of this sort are technically feasible and do occur in small portions, large philanthropists rarely (if ever) distribute the majority of their assets to other value-aligned actors for the purpose of reducing concentration risk. I would guess they do not distribute their funds primarily because (1) large philanthropists do not trust others to persistently share their values, (2) they do not trust others to do a good job identifying the best giving opportunities, and (3) they do not take concentration risk particularly seriously. At the least, large philanthropists should take concentration risk more seriously, although I do not know what to do about the other two points.&lt;/p&gt;

&lt;p&gt;If large philanthropists do want to spread out their money, it makes sense that they should take care to ensure they only give it to competent, value-aligned associates.&lt;/p&gt;

&lt;p&gt;Alternatively, institutions can diversify by spinning off separate organizations. This avoids the competence and value-alignment problems because they can form the new organizations with existing staff members, but it introduces a new set of complications.&lt;/p&gt;

&lt;p&gt;Observe that even when assets are distributed across multiple funds, expropriation and value drift still reduce the expected rate of return on investments in a way that looking at historical market returns does not account for. This is a good trade—decreasing the discount rate and decreasing the investment rate by the same amount probably increases utility in most situations—but it isn’t as good as eliminating the risks entirely.&lt;/p&gt;

&lt;p&gt;Relatedly, wealthy individuals often create foundations to manage their donations, which (among other benefits) reduces value drift by providing checks on donation decisions (by involving paid staff in the decisions, or by psychologically reinforcing commitment to altruistic behavior). Converting wealthy-individual money into foundation money probably works extremely well at decreasing the value drift rate, and fortunately, it’s already common practice.&lt;/p&gt;

&lt;h2 id=&quot;what-about-individual-value-drift&quot;&gt;What about individual value drift?&lt;/h2&gt;

&lt;p&gt;&lt;a href=&quot;#value-drift&quot;&gt;As we saw&lt;/a&gt;, the existing (limited) evidence suggests about a 10% value drift rate among individual effective altruists. When individuals stop donating, this does not constitute a complete loss of capital because other value-aligned altruists can continue to provide funding; but it does hurt the effective investment rate of return.&lt;/p&gt;

&lt;p&gt;Imagine if philanthropists could invest in an asset with 10 percentage points higher return than the market (at the same level of risk). That would represent a &lt;em&gt;phenomenal&lt;/em&gt; opportunity. But that’s exactly what we can get by reducing the value drift rate. We can’t get the individual value drift rate all the way down to 0%, but it’s so high right now that we could probably find a lot of impactful ways to reduce it. Reducing this rate from 10% to 5% might require less effort than reducing the probability of extinction from (say) 0.2% to 0.19%. These numbers are not based on any meaningful analysis, but they seem plausible given the extreme neglectedness of this cause area.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://forum.effectivealtruism.org/posts/jG8pptGksBpzyTxYg/a-qualitative-analysis-of-value-drift-in-ea-1#Future_research_directions&quot;&gt;Marisa Jurczyk offers some suggestions&lt;/a&gt; on future research that could help reduce individual value drift.&lt;/p&gt;

&lt;h1 id=&quot;significance-of-mis-estimating-the-discount-rate&quot;&gt;Significance of mis-estimating the discount rate&lt;/h1&gt;

&lt;p&gt;As &lt;a href=&quot;https://scholar.harvard.edu/files/weitzman/files/gamma_discounting.pdf&quot;&gt;Weitzman (2001)&lt;/a&gt;&lt;sup id=&quot;fnref:7&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:7&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;16&lt;/a&gt;&lt;/sup&gt; wrote, “the choice of an appropriate discount rate is one of the most critical problems in all of economics.” Changing the estimated discount rate substantially changes the implied optimal behavior.&lt;/p&gt;

&lt;p&gt;Some might argue that we simply cannot estimate the discount rate, and it remains fundamentally unknowable. While I agree that we have no idea what discount rate to use, I do not believe we should &lt;a href=&quot;https://www.lesswrong.com/posts/dLJv2CoRCgeC2mPgj/the-fallacy-of-gray&quot;&gt;equivocate&lt;/a&gt; between (1) the radically uncertain state of knowledge if we don’t think about the discount rate at all, (2) the highly uncertain state of knowledge if we think about it a little bit, and (3) what our state of knowledge could be if we invested much more in estimating the discount rate. Philanthropists’ behavior necessarily entails some (implicit) discount rate; it is better to use a poor estimate than no estimate at all.&lt;/p&gt;

&lt;p&gt;Aird (2020), &lt;a href=&quot;https://forum.effectivealtruism.org/posts/JQQAQrunyGGhzE23a/database-of-existential-risk-estimates#Why_this_database_may_be_valuable&quot;&gt;“Database of existential risk estimates”&lt;/a&gt;, argues for the importance of better estimating the probability of extinction. Our estimates for value drift and changes in opportunities appear even rougher than for extinction, so working on improving these might be easier and therefore more cost-effective.&lt;/p&gt;

&lt;p&gt;Some economic literature exists on estimating the discount rate (such as &lt;a href=&quot;https://scholar.harvard.edu/files/weitzman/files/gamma_discounting.pdf&quot;&gt;Weitzman (2001)&lt;/a&gt;&lt;sup id=&quot;fnref:7:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:7&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;16&lt;/a&gt;&lt;/sup&gt;, &lt;a href=&quot;http://www.econ.yale.edu/~nordhaus/homepage/OldWebFiles/DICEGAMS/dice_mss_072407_all.pdf&quot;&gt;Nordhaus (2007)&lt;/a&gt;&lt;sup id=&quot;fnref:10&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:10&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;17&lt;/a&gt;&lt;/sup&gt;, and &lt;a href=&quot;https://www.brown.edu/Departments/Economics/Faculty/Matthew_Turner/ec1340/readings/Sternreview_full.pdf&quot;&gt;Stern (2007)&lt;/a&gt;&lt;sup id=&quot;fnref:11&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:11&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;18&lt;/a&gt;&lt;/sup&gt;), but philanthropists do not always discount for the same reasons as self-interested actors, so for our purposes, these estimates provide limited value.&lt;/p&gt;

&lt;p&gt;How much should we value marginal research on estimating the philanthropic discount rate?&lt;/p&gt;

&lt;h2 id=&quot;extended-ramsey-model-with-estimated-discount-rate&quot;&gt;Extended Ramsey model with estimated discount rate&lt;/h2&gt;

&lt;p&gt;Intuitively, it seems that mis-estimating the discount rate could result in substantially wrong decisions about how much to spend vs. save, and this could matter a lot. Some quantitative analysis with a simple model supports this intuition.&lt;/p&gt;

&lt;p&gt;In the &lt;a href=&quot;#introduction&quot;&gt;introduction&lt;/a&gt;, I presented the Ramsey model as a simple theoretical approach for determining how to spend resources over time. Let’s return to this model. Additionally, let’s assume we experience logarithmic utility of consumption, because doing so produces the simplest possible formula for the consumption schedule.&lt;/p&gt;

&lt;p&gt;An actor maximizes utility by following this consumption schedule&lt;sup id=&quot;fnref:4:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:4&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt;:&lt;/p&gt;

&lt;p&gt;\begin{align}
c(t) = \delta e^{(r - \delta) t}
\end{align}&lt;/p&gt;

&lt;p&gt;\(\delta\) gives the proportion of assets to be consumed each period&lt;sup id=&quot;fnref:16&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:16&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;19&lt;/a&gt;&lt;/sup&gt;, and \(e^{(r - \delta) t}\) tells us the size of the portfolio at time t (recall that r is the investment rate of return). According to the chosen set of assumptions, the optimal consumption rate exactly equals the discount rate.&lt;/p&gt;
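&lt;p&gt;As a sanity check, we can simulate this policy numerically: a portfolio earning \(r\) while consuming a fraction \(\delta\) of assets per unit time should track \(e^{(r - \delta) t}\). (A quick sketch with illustrative parameter values, not estimates.)&lt;/p&gt;

```python
import math

# Euler simulation: a portfolio (normalized to 1) earns return r while a
# fraction delta of assets is consumed each instant. Illustrative parameters.
r, delta = 0.05, 0.005   # 5% return, 0.5% consumption/discount rate
dt, T = 0.001, 50.0
W = 1.0
for _ in range(int(T / dt)):
    W += (r * W - delta * W) * dt   # investment income minus consumption

# The portfolio should match e^{(r - delta) t}, so consumption at time T
# is c(T) = delta * e^{(r - delta) T}, as in the formula above.
closed_form = math.exp((r - delta) * T)
assert abs(W - closed_form) / closed_form < 1e-3
```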

&lt;p&gt;Suppose a philanthropist attempts to follow this optimal consumption schedule. Suppose they estimate the discount rate as \(\hat{\delta}\), which might differ from the true \(\delta\). In that case, the philanthropist’s total long-run utility is given by&lt;/p&gt;

&lt;p&gt;\begin{align}
U = \int_0^\infty e^{-\delta t} \log(\hat{\delta} e^{(r - \hat{\delta}) t}) dt
= \frac{\log(\hat{\delta})}{\delta} + \frac{r - \hat{\delta}}{\delta^2}
\end{align}&lt;/p&gt;
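&lt;p&gt;We can verify this closed form by numerically integrating the left-hand side for some illustrative parameter values (arbitrary numbers chosen for the check, not estimates):&lt;/p&gt;

```python
import math

# Arbitrary illustrative values (as decimals): true delta = 2%,
# estimated delta-hat = 3%, investment return r = 5%.
d, d_hat, r = 0.02, 0.03, 0.05

# Closed form: U = log(delta_hat)/delta + (r - delta_hat)/delta^2
closed_form = math.log(d_hat) / d + (r - d_hat) / d ** 2

# Midpoint Riemann sum; e^(-d t) makes the tail beyond t = 2000 negligible.
dt, U = 0.01, 0.0
for i in range(200_000):
    t = (i + 0.5) * dt
    U += math.exp(-d * t) * math.log(d_hat * math.exp((r - d_hat) * t)) * dt

assert abs(U - closed_form) < 0.05
```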

&lt;p&gt;To see how quickly utility increases as we move \(\hat{\delta}\) closer to \(\delta\), we should look at the derivative of utility with respect to \(\hat{\delta}\):&lt;/p&gt;

&lt;p&gt;\begin{align}
\frac{d U}{d \hat{\delta}} = \frac{1}{\hat{\delta} \delta} - \frac{1}{\delta^2}
\end{align}&lt;/p&gt;

&lt;p&gt;What does this mean, exactly?&lt;/p&gt;

&lt;p&gt;Suppose we have a choice between (1) moving \(\hat{\delta}\) closer to \(\delta\) or (2) improving how effectively we use money by changing our utility function from \(\log(c(t))\) to \(b \log(c(t))\) for some increasing “impact factor” \(b &amp;gt; 1\). When should we prefer (1) over (2)?&lt;/p&gt;

&lt;p&gt;We should prefer improving \(\hat{\delta}\) whenever utility increases faster by decreasing \(\mid \hat{\delta} - \delta \mid\) than by increasing b, that is, whenever \(\mid \frac{d U}{d \hat{\delta}} \mid  &amp;gt;  \mid \frac{d}{d b} b \cdot U \mid  =  \mid U \mid\) for some particular values of \(\delta, \hat{\delta}, r\) (using absolute values because we only care about the magnitude of change, not the direction).&lt;/p&gt;

&lt;p&gt;The formula for \(\frac{d U}{d \hat{\delta}}\) is hard to comprehend intuitively. But if we plug in some values for \(r\), \(\delta\), and \(\hat{\delta}\), we see that \(\mid \frac{d U}{d \hat{\delta}} \mid  &amp;gt;  \mid U \mid\) for most reasonable inputs. For example, \(r = 5\%, \delta = 0.5\%, \hat{\delta} = 0.8\%\) (a mis-estimate of 0.3 percentage points) gives \(\frac{d U}{d \hat{\delta}} = -15,000, U = -86\). A closer estimate of \(\hat{\delta} = 0.55\%\) gives \(\frac{d U}{d \hat{\delta}} = -3,600, U = -61\). Therefore, according to this model, improving \(\hat{\delta}\) looks highly effective.&lt;/p&gt;
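&lt;p&gt;These derivative values are easy to reproduce directly (with the percentages written as decimals):&lt;/p&gt;

```python
# dU/d(delta-hat) = 1/(delta_hat * delta) - 1/delta^2, rates as decimals
def dU(d_hat, d):
    return 1 / (d_hat * d) - 1 / d ** 2

assert round(dU(0.008, 0.005)) == -15000   # the -15,000 quoted above
assert round(dU(0.0055, 0.005)) == -3636   # quoted above as roughly -3,600
```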

&lt;p&gt;We also care about the rate at which we can improve \(\hat{\delta}\) and \(b\). Presumably, moving \(\hat{\delta}\) closer to \(\delta\) becomes something like exponentially more difficult over time—we could model this process as \(\hat{\delta} = \delta \pm e^{-a x}\), where \(x\) is effort spent researching the correct discount rate and \(a\) is some constant. Then we need a function for the difficulty of increasing the impact factor b, perhaps \(b(x) = \log(x)\).&lt;/p&gt;

&lt;p&gt;Ultimately, we would need a much more complicated formulation to somewhat-accurately model our ability to improve the discount rate, and we cannot draw strong conclusions from the basic Ramsey model. But in our simple model, \(\mid \frac{d U}{d \hat{\delta}} \mid\) is much larger than \(\mid U \mid\) for reasonable parameters, which at least hints that improving our estimate of the discount rate—and adjusting our spending schedules accordingly—could be a highly effective way to increase utility, especially given how weak our current estimates are and how much low-hanging fruit probably remains. This preliminary result seems to justify spending a substantially larger fraction of altruistic resources on estimating \(\delta\).&lt;/p&gt;

&lt;h2 id=&quot;a-plan-for-a-slightly-more-realistic-model&quot;&gt;A plan for a (slightly) more realistic model&lt;/h2&gt;

&lt;p&gt;The model in the previous section assumes that a philanthropist can choose between saving and consumption at each moment in time, and can also spend out of an entirely separate budget to improve \(\hat{\delta}\). This makes the optimization problem easier, but doesn’t really make sense.&lt;/p&gt;

&lt;p&gt;Under a more realistic model, the philanthropist can choose between three options: (1) saving, (2) consumption, and (3) improving \(\hat{\delta}\). That is, research on estimating the discount rate comes out of the same budget as general consumption.&lt;/p&gt;

&lt;p&gt;Under this model, the philanthropist wishes to maximize&lt;/p&gt;

&lt;p&gt;\begin{align}
U = \int_0^\infty e^{-\delta t} u(c(t)) dt
\end{align}&lt;/p&gt;

&lt;p&gt;with the constraint that \(c(t)\) cannot be a function of \(\delta\), only of \(\hat{\delta}\). Additionally, we can define a function \(\hat{\delta}(Y(t))\) giving the best estimate of \(\delta\) as a function of \(Y(t)\), the cumulative spending on determining \(\delta\) up to time \(t\).&lt;/p&gt;

&lt;p&gt;Solving this problem requires stronger calculus skills than I possess, so I will leave it as an open question for future research.&lt;/p&gt;

&lt;p&gt;Some other useful model extensions:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Allow the philanthropist to invest in risky assets. As a starting point, see Levhari and Srinivasan (1969), “Optimal Savings Under Uncertainty”.&lt;/li&gt;
  &lt;li&gt;Make the discount rate a function of resources spent on reducing it (such as via x-risk research). That is, \(U = \displaystyle\int_0^\infty e^{-\delta(c_2(t)) t} u(c_1(t)) dt\).&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;weitzman-gollier-puzzle&quot;&gt;Weitzman-Gollier puzzle&lt;/h2&gt;

&lt;p&gt;According to &lt;a href=&quot;https://scholar.harvard.edu/files/weitzman/files/howshoulddistantfuture.pdf&quot;&gt;Gollier and Weitzman (2010)&lt;/a&gt;, in the face of uncertainty about the discount rate, “[t]he long run discount rate declines over time toward its lowest possible value.” There exists some disagreement in the economic literature as to whether the discount rate should trend toward its lowest or its highest possible value. This disagreement is known as the Weitzman-Gollier puzzle (WGP). I have not studied this disagreement well enough to have an informed opinion, but &lt;a href=&quot;http://users.ox.ac.uk/~mert2255/papers/discounting.pdf&quot;&gt;Greaves (2017)&lt;/a&gt;&lt;sup id=&quot;fnref:8:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:8&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt; claims “there is a widespread consensus” that “something like” the lowest possible long-run discount rate should be used.&lt;/p&gt;

&lt;p&gt;How much we care about this puzzle for the purposes of this essay depends on how we interpret long-term discount rates. If current consumption is only a function of the current discount rate, then WGP doesn’t matter. If instead we believe that the long-run rate affects how much we should consume today, then Weitzman-Gollier becomes relevant. I already argued that we should expect the discount rate to &lt;a href=&quot;#breaking-down-the-long-run-discount-rate&quot;&gt;decline over time&lt;/a&gt; (e.g., as extinction risk decreases and institutions become more robust), so Weitzman-Gollier provides an additional argument in favor of this policy.&lt;/p&gt;

&lt;h2 id=&quot;some-arguments-against-prioritizing-improving-the-discount-rate-estimate&quot;&gt;Some arguments against prioritizing improving the discount rate estimate&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Argument from long-term convergence:&lt;/strong&gt; Over a sufficiently long time horizon, it seems our estimate will surely converge on the true discount rate, even if we don’t invest much in figuring it out. At that time, and in perpetuity after that, we can follow the optimal spending rate. If we prioritize figuring out \(\delta\) now, that only helps us from now until when we would have solved \(\delta\) anyway. (But on the other hand, improving our estimate in the short term could still increase utility by a lot.)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Argument from intuitive meaningfulness:&lt;/strong&gt; Improving our estimate of the discount rate feels somehow less &lt;em&gt;meaningful&lt;/em&gt; than actively reducing the discount rate (e.g., by reducing risk of extinction). In some sense, by improving our estimate, we aren’t really &lt;em&gt;doing&lt;/em&gt; anything. Obviously we do increase expected utility by better spreading out our spending over time, but this doesn’t feel like the same sort of benefit as improving the effectiveness of our spending, or expanding the community to increase the pool of donations. Even if the Ramsey model supports improving \(\hat{\delta}\) as possibly the most effective intervention, this model entails a lot of assumptions, so we should pay attention to intuitions that contradict the model.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Argument from model uncertainty:&lt;/strong&gt; Causes like global poverty prevention look good across many models and even many value systems (although &lt;a href=&quot;https://mdickens.me/2015/12/20/are_givewell_top_charities_too_speculative/&quot;&gt;we don’t really know&lt;/a&gt; if global poverty prevention is even net positive). Under the Ramsey model, improving \(\hat{\delta}\) still looks good across a lot of value systems—it benefits you to improve the spending schedule no matter what utility function you use—but we don’t know if it holds up in non-Ramsey-like models. Furthermore, it’s a new idea that has not been subjected to much scrutiny.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Argument from market efficiency:&lt;/strong&gt; According to the efficient market hypothesis (EMH), the correct discount rate should be embedded in market prices. Market forces don’t always apply to philanthropic actors, but it seems plausible that something like a weaker version of EMH might still hold. Thus, we might expect the “philanthropic market” to basically correctly determine the discount rate, even if no individual actor has high confidence in their particular estimate. On the other hand, in practice, the philanthropic market appears far less efficient than the for-profit sector (or else the effective altruist approach would be much more popular!).&lt;/p&gt;

&lt;h2 id=&quot;applying-the-importancetractabilityneglectedness-framework&quot;&gt;Applying the importance/tractability/neglectedness framework&lt;/h2&gt;

&lt;p&gt;Let’s qualitatively consider improving the discount rate and see how it fits in the &lt;a href=&quot;https://concepts.effectivealtruism.org/concepts/importance-neglectedness-tractability/&quot;&gt;importance/tractability/neglectedness framework&lt;/a&gt;.&lt;/p&gt;

&lt;h3 id=&quot;importance&quot;&gt;Importance&lt;/h3&gt;

&lt;p&gt;If we use philanthropic resources slightly too slowly, we lose out on the benefits of this marginal consumption, and continue losing out every year in perpetuity (or at least until we correct our estimate of the discount rate).&lt;/p&gt;

&lt;p&gt;If we use resources too quickly, this eats into potential investment returns, decreasing the size of our future portfolio and hamstringing philanthropists’ ability to do good in the future.&lt;/p&gt;

&lt;p&gt;Under the Ramsey model, slightly refining the discount rate estimate greatly increases utility. But the previous section does provide some arguments against the importance of a correct discount rate.&lt;/p&gt;

&lt;p&gt;Improving our estimate of the discount rate only matters in situations where we provide all the funding for a cause, or where we can coordinate with all (or most) other funders. If we only control a small portion of funds and other funders do not follow optimal consumption, then we simply want to bring overall spending closer to the optimal rate, which requires us to consume either all or none of our resources. In this situation, we do not need to exactly estimate the discount rate—we only need to know whether other funders use a discount that’s too low or too high. But we do care about the exact rate in smaller causes (probably including existential risk, and possibly farm animal welfare) where we can coordinate with other donors.&lt;/p&gt;

&lt;h3 id=&quot;tractability&quot;&gt;Tractability&lt;/h3&gt;

&lt;p&gt;Estimating the discount rate appears much easier than, say, ending global poverty. I can easily come up with several ways we could improve our estimate:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Better surveys or studies on the probability of extinction, or better attempts to synthesize an estimate out of existing surveys.&lt;/li&gt;
  &lt;li&gt;Research on historical movements to learn why they failed or succeeded.&lt;/li&gt;
  &lt;li&gt;Theoretical research on how philanthropists should consume as a function of the discount rate.&lt;/li&gt;
  &lt;li&gt;Theoretical research on how to break down the discount rate.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This suggests we could substantially improve our estimate with relatively little effort.&lt;/p&gt;

&lt;h3 id=&quot;neglectedness&quot;&gt;Neglectedness&lt;/h3&gt;

&lt;p&gt;Some academic literature exists on estimating the discount rate, although much of this literature doesn’t entirely apply to effective altruists. Within EA, I am only aware of one prior attempt to estimate the discount rate (from Trammell&lt;sup id=&quot;fnref:5:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:5&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;5&lt;/a&gt;&lt;/sup&gt;), and this was only given as a rough guideline. Even within academia, one could fairly describe this area of research as neglected; within EA, it has barely even been mentioned. The sheer neglectedness of this issue suggests that even a tiny amount of effort could substantially improve our estimate.&lt;/p&gt;

&lt;p&gt;All things considered, it seems likely to me that the effective altruism community substantially under-invests in trying to determine the correct discount rate, but the simple extension to the Ramsey model perhaps overstates the case.&lt;/p&gt;

&lt;h1 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h1&gt;

&lt;p&gt;In this essay, I have reviewed a number of philanthropic opportunities that, according to the simplistic Ramsey model, could substantially improve the world. Some of these are already widely discussed in the EA community, others receive a little attention, and some are barely known at all. These opportunities include:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Reducing existential risk.&lt;/li&gt;
  &lt;li&gt;Reducing individual value drift.&lt;/li&gt;
  &lt;li&gt;Improving the ability of individuals to delegate their income to value-stable institutions.&lt;/li&gt;
  &lt;li&gt;Making expropriation and value drift less threatening by spreading altruistic funds more evenly across actors and countries.&lt;/li&gt;
  &lt;li&gt;Reducing the institutional value drift/expropriation rate.&lt;/li&gt;
  &lt;li&gt;More accurately estimating the discount rate in order to know how best to use resources over time.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Before writing this essay, I created some basic models of the cost-effectiveness of each of these. The models are sufficiently complicated, and provide sufficiently little explanatory value, that I will not present them here. Suffice it to say the models suggest that #6—improving the estimate of the discount rate—does the most good per dollar spent. Obviously this heavily depends on model assumptions (and my models made a lot of assumptions). The takeaway is that, based on what we currently know, any of these six opportunities could plausibly represent the best effective altruist cause right now.&lt;/p&gt;

&lt;p&gt;Let’s briefly address each of these opportunities.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Existential risk&lt;/strong&gt; already receives much attention in the EA community, so I have little to add.&lt;/p&gt;

&lt;p&gt;A few EAs have written about &lt;strong&gt;individual value drift&lt;/strong&gt;, most notably &lt;a href=&quot;https://forum.effectivealtruism.org/posts/jG8pptGksBpzyTxYg/a-qualitative-analysis-of-value-drift-in-ea-1&quot;&gt;Marisa Jurczyk&lt;/a&gt;, who also provided some qualitative suggestions for how to reduce value drift. But, as Jurczyk noted, “[t]he study of EAs’ experiences with value drift is rather neglected, so further research is likely to be highly impactful and beneficial for the community.”&lt;/p&gt;

&lt;p&gt;If individuals want to &lt;strong&gt;delegate their donations to institutions&lt;/strong&gt;, they run into the problem that most of their donations come from future income, and they cannot move this income from the future to the present. Donors have &lt;a href=&quot;https://mdickens.me/2015/10/17/should_altruists_leverage_donations/&quot;&gt;a few options for “leveraging” donations&lt;/a&gt;, but none of them look particularly feasible. If we identified better ways to help individuals delegate their future donations, that could provide a lot of value.&lt;/p&gt;

&lt;p&gt;To my knowledge, the idea of &lt;strong&gt;spreading altruistic funds&lt;/strong&gt; has never been meaningfully discussed. It poses substantial challenges in practice, and I can see why institutions generally don’t want to do it. But I do think this idea has potential if we can figure out how to make it work.&lt;/p&gt;

&lt;p&gt;Many types of institutions, not just effective altruists, should care about &lt;strong&gt;reducing the institutional value drift/expropriation rate&lt;/strong&gt;. It’s possible that there already exists literature on this subject, although I’m not aware of any. More research in this area could prove highly valuable.&lt;/p&gt;

&lt;p&gt;I discussed &lt;strong&gt;improving our estimate of the discount rate&lt;/strong&gt; in &lt;a href=&quot;#significance-of-mis-estimating-the-discount-rate&quot;&gt;the previous section&lt;/a&gt;. According to my preliminary investigation, this could be a highly impactful area of research.&lt;/p&gt;

&lt;p&gt;This table provides my (extremely) rough guesses as to the &lt;a href=&quot;https://concepts.effectivealtruism.org/concepts/importance-neglectedness-tractability/&quot;&gt;importance, tractability, and neglectedness&lt;/a&gt; of these cause areas relative to each other. When I say, for example, that I believe existential risk has low neglectedness, that’s relative to the other causes on this list, not in general. (Existential risk is highly neglected compared to, say, developed-world education.)&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt;Importance&lt;/th&gt;
      &lt;th&gt;Tractability&lt;/th&gt;
      &lt;th&gt;Neglectedness&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;existential risk&lt;/td&gt;
      &lt;td&gt;high&lt;/td&gt;
      &lt;td&gt;low&lt;/td&gt;
      &lt;td&gt;low&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;individual value drift&lt;/td&gt;
      &lt;td&gt;low&lt;/td&gt;
      &lt;td&gt;medium&lt;/td&gt;
      &lt;td&gt;medium&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;delegating individuals’ donations&lt;/td&gt;
      &lt;td&gt;low&lt;/td&gt;
      &lt;td&gt;medium&lt;/td&gt;
      &lt;td&gt;medium&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;spreading altruistic funds&lt;/td&gt;
      &lt;td&gt;medium&lt;/td&gt;
      &lt;td&gt;high&lt;/td&gt;
      &lt;td&gt;high&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;institutional value drift/expropriation&lt;/td&gt;
      &lt;td&gt;medium&lt;/td&gt;
      &lt;td&gt;medium&lt;/td&gt;
      &lt;td&gt;medium&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;estimating discount rate&lt;/td&gt;
      &lt;td&gt;medium&lt;/td&gt;
      &lt;td&gt;high&lt;/td&gt;
      &lt;td&gt;high&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;(While revising this essay, I basically completely re-did this table twice. My opinion might completely change again by next week. So don’t treat these as well-informed guesses.)&lt;/p&gt;

&lt;p&gt;Finally, questions that merit future investigation:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;What implications do we get if we change various model assumptions?&lt;/li&gt;
  &lt;li&gt;How does the discount rate for effective altruists compare to the more traditional &lt;a href=&quot;http://www.lse.ac.uk/GranthamInstitute/faqs/what-are-social-discount-rates/&quot;&gt;social discount rate&lt;/a&gt;, and what is the significance of this comparison? What do we get if we attempt to derive our discount rate from the social discount rate?&lt;/li&gt;
  &lt;li&gt;How should we derive optimal consumption from the current and long-term discount rates?&lt;/li&gt;
  &lt;li&gt;What coefficient of relative risk aversion (\(\eta\)) and investment rate of return (r) should be used? Should we expect them to change in the long run?&lt;/li&gt;
  &lt;li&gt;Why do effective altruist organizations &lt;a href=&quot;https://80000hours.org/2018/10/2018-talent-gaps-survey/#they-report-quite-high-discount-rates-on-future-donations&quot;&gt;report such high discount rates&lt;/a&gt;?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Literature already exists on some of these, e.g., &lt;a href=&quot;https://doi.org/10.1016/B978-0-12-780850-5.50045-9&quot;&gt;Hakansson (1970)&lt;/a&gt;&lt;sup id=&quot;fnref:12&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:12&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;20&lt;/a&gt;&lt;/sup&gt; on modifying the Ramsey model to allow for risky investments. Future work could review some of this literature and draw implications for effective altruists’ behavior.&lt;/p&gt;

&lt;p&gt;Thanks to Mindy McTeigue and Philip Trammell for providing feedback on this essay.&lt;/p&gt;

&lt;h1 id=&quot;appendix-proof-that-spending-should-decrease-as-the-discount-rate-decreases&quot;&gt;Appendix: Proof that spending should decrease as the discount rate decreases&lt;/h1&gt;

&lt;p&gt;In the basic Ramsey model, the discount factor (call it D(t)) is given by \(e^{-\delta t}\). If we generalize the discount factor and allow it to obey any function, we can rewrite total utility as&lt;/p&gt;

&lt;p&gt;\begin{align}
U = \int_0^\infty D(t) u(c(t)) dt
\end{align}&lt;/p&gt;

&lt;p&gt;Let \(\delta(t)\) be the discount rate, where \(\delta(t) = -\frac{D&apos;(t)}{D(t)}\). (Observe that when \(D(t) = e^{-\delta t}\), \(\delta(t) = \delta\).) We want the discount rate to decline with time. Many possible functions could give a declining discount rate, but for the sake of illustration, let’s use \(\delta(t) = \frac{\alpha}{t} + \delta_f\). With this discount function, the discount rate gradually decreases over time to a minimum of \(\delta_f\). \(\alpha\) is a scale parameter that determines how rapidly the discount rate decreases. This corresponds to discount factor \(D(t) = t^{-\alpha} e^{-\delta_f t}\). This is similar to the “Gamma discount” used by &lt;a href=&quot;https://scholar.harvard.edu/files/weitzman/files/gamma_discounting.pdf&quot;&gt;Weitzman (2001)&lt;/a&gt;&lt;sup id=&quot;fnref:7:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:7&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;16&lt;/a&gt;&lt;/sup&gt;&lt;sup id=&quot;fnref:19&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:19&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;21&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;
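&lt;p&gt;As a quick check that the discount factor \(D(t) = t^{-\alpha} e^{-\delta_f t}\) really implies \(\delta(t) = \frac{\alpha}{t} + \delta_f\), we can compare a finite-difference approximation of \(-\frac{D&apos;(t)}{D(t)}\) against the target rate (illustrative parameter values):&lt;/p&gt;

```python
import math

alpha, delta_f = 0.5, 0.01   # illustrative scale and floor parameters

def D(t):
    # discount factor D(t) = t^(-alpha) * e^(-delta_f * t)
    return t ** (-alpha) * math.exp(-delta_f * t)

def implied_rate(t, h=1e-6):
    # delta(t) = -D'(t)/D(t), with D'(t) approximated by a central difference
    return -(D(t + h) - D(t - h)) / (2 * h) / D(t)

# The implied rate should equal alpha/t + delta_f at every point in time
for t in (0.5, 1.0, 5.0, 25.0):
    assert abs(implied_rate(t) - (alpha / t + delta_f)) < 1e-5
```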

&lt;p&gt;Under this discount rate, the optimal consumption rate declines over time. We can prove this by following the same proof steps as Trammell&lt;sup id=&quot;fnref:5:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:5&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;5&lt;/a&gt;&lt;/sup&gt;, but using a different discount factor.&lt;/p&gt;

&lt;p&gt;Trammell defines y(t) as “the resources allocated at time 0 for investment until, followed by spending at, t.” He observes that utility is maximized when the derivative of discounted utility with respect to y(t) equals some constant k, and then solves for y(t). If we solve for y(t) with a generalized time-dependent discount factor, we get&lt;/p&gt;

&lt;p&gt;\begin{align}
y(t) = k^{\frac{-1}{\eta}} D(t)^{\frac{1}{\eta}} e^{\frac{r - r \eta}{\eta} t}
\end{align}&lt;/p&gt;

&lt;p&gt;Observing that \(\int_0^\infty y(t) dt = 1\) allows us to solve for k. Plugging in \(D(t) = t^{-\alpha} e^{-\delta_f t}\), solving the integral, and rearranging gives&lt;/p&gt;

&lt;p&gt;\begin{align}
k = \left( \frac{r \eta - r + \delta_f}{\eta} \right)^{\alpha - \eta} \Gamma \left( 1 - \frac{\alpha}{\eta} \right)^\eta
\end{align}&lt;/p&gt;

&lt;p&gt;where \(\Gamma(x)\) is the &lt;a href=&quot;https://en.wikipedia.org/wiki/Gamma_function&quot;&gt;Gamma function&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Plugging this into the formula for y(t) gives&lt;/p&gt;

&lt;p&gt;\begin{align}
y(t) = \left( \frac{r \eta - r + \delta_f}{\eta} \right)^\frac{\eta - \alpha}{\eta} \Gamma \left( 1 - \frac{\alpha}{\eta} \right)^{-1} t^\frac{-\alpha}{\eta} e^{\frac{r - r \eta - \delta_f}{\eta} t}
\end{align}&lt;/p&gt;

&lt;p&gt;Observe that \(c(t) = e^{rt} y(t)\). Therefore, c(t) is proportional to \(t^\frac{-\alpha}{\eta} e^{\frac{r - \delta_f}{\eta} t}\).&lt;/p&gt;

&lt;p&gt;Let \(c_v(t)\) be optimal consumption according to the variable-discount model, and similarly with \(c_f(t)\) for the fixed-discount model. Recall that \(c_f(t) \propto e^{\frac{r - \delta}{\eta} t}\). If \(\delta = \delta_f\), then \(c_v(t) \propto t^\frac{-\alpha}{\eta} c_f(t)\). Therefore, \(c_v(t)\) grows more slowly than \(c_f(t)\) (when t &amp;gt; 1). The fixed-discount case has a constant consumption rate, so the variable-discount case must have a decreasing consumption rate. \(\blacksquare\)&lt;/p&gt;
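&lt;p&gt;As a numerical sanity check on this derivation, we can confirm that \(y(t)\) integrates to 1 for sample parameters, using \(D(t) = t^{-\alpha} e^{-\delta_f t}\) and the normalizing constant \(k = \Gamma \left( 1 - \frac{\alpha}{\eta} \right)^\eta \left( \frac{r \eta - r + \delta_f}{\eta} \right)^{\alpha - \eta}\) (parameter values are mine, chosen for illustration):&lt;/p&gt;

```python
import math

# Illustrative parameters: r = 5%, log utility (eta = 1), delta_f = 1%, alpha = 0.5
r, eta, delta_f, alpha = 0.05, 1.0, 0.01, 0.5

b = (r * eta - r + delta_f) / eta   # decay rate; must be positive for convergence
k = math.gamma(1 - alpha / eta) ** eta * b ** (alpha - eta)

def y(t):
    # y(t) = k^(-1/eta) * D(t)^(1/eta) * e^((r - r*eta) * t / eta),
    # with D(t) = t^(-alpha) * e^(-delta_f * t)
    D = t ** (-alpha) * math.exp(-delta_f * t)
    return k ** (-1.0 / eta) * D ** (1.0 / eta) * math.exp((r - r * eta) / eta * t)

# Integrate over (0, inf) with the substitution t = u^2, which removes the
# integrable singularity at t = 0; the tail past u = 150 is negligible.
du, total = 0.01, 0.0
for i in range(15_000):
    u = (i + 0.5) * du
    total += 2 * u * y(u * u) * du

assert abs(total - 1) < 1e-3   # the budget shares y(t) sum to the whole endowment
```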

&lt;p&gt;Some brief observations about this variable-discount model:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;When \(\alpha = 0\), it behaves identically to the fixed-discount case with \(\delta = \delta_f\).&lt;/li&gt;
  &lt;li&gt;Like the fixed-discount model, when \(r \eta - r + \delta_f \le 0\), the model suggests we should save indefinitely and never consume. This condition does not depend on t—that is, this model will never recommend consuming for a while and then ceasing consumption once the discount rate drops below a certain level.&lt;/li&gt;
  &lt;li&gt;Optimal consumption at time 0 is not defined because \(\lim\limits_{t \rightarrow 0^+} \delta(t) = \infty\).&lt;/li&gt;
  &lt;li&gt;Knowing optimal consumption \(c_v(t)\) does not tell us the optimal consumption &lt;em&gt;rate&lt;/em&gt;. I do not believe the optimal consumption rate has a closed-form solution.&lt;/li&gt;
  &lt;li&gt;The optimal consumption schedule depends on what one considers the “start time”, and one’s beliefs about optimal consumption can be inconsistent across time. &lt;a href=&quot;https://doi.org/10.2307/2118482&quot;&gt;Loewenstein and Prelec (1992)&lt;/a&gt;&lt;sup id=&quot;fnref:9&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:9&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;22&lt;/a&gt;&lt;/sup&gt; discuss this and other related issues. However, this problem does not seriously affect the model as I have portrayed it&lt;sup id=&quot;fnref:14&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:14&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;23&lt;/a&gt;&lt;/sup&gt;.&lt;/li&gt;
&lt;/ol&gt;
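&lt;p&gt;As a numerical illustration of the proof above (a sketch only; the parameter values are arbitrary assumptions, not taken from this essay), we can check that the ratio \(c_v(t) / c_f(t) \propto t^\frac{-\alpha}{\eta}\) shrinks as \(t\) grows:&lt;/p&gt;

```python
import math

# Illustrative parameter values (assumptions, not estimates from the essay)
alpha = 1.0     # hyperbolic component of the discount rate
eta = 1.5       # elasticity of marginal utility of consumption
r = 0.05        # interest rate
delta_f = 0.02  # fixed component of the discount rate

def c_fixed(t):
    # c_f(t) is proportional to e^((r - delta_f) t / eta)
    return math.exp((r - delta_f) * t / eta)

def c_variable(t):
    # c_v(t) is proportional to t^(-alpha/eta) * c_f(t)
    return t ** (-alpha / eta) * c_fixed(t)

ratios = [c_variable(t) / c_fixed(t) for t in (1, 2, 4, 8)]
# The ratio strictly decreases, so variable-discount consumption grows
# more slowly than fixed-discount consumption.
assert all(a > b for a, b in zip(ratios, ratios[1:]))
```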


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:23&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;The Ramsey model also depends on two other parameters: the interest rate and the elasticity of marginal utility of consumption. Those parameters are beyond the scope of this essay. &lt;a href=&quot;#fnref:23&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:25&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I won’t go into detail, but we have good theoretical reasons to expect most actors to spend impatiently, so for most causes, we plausibly want to invest all our money because other actors already over-spend according to our values. See Trammell&lt;sup id=&quot;fnref:5:4&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:5&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;5&lt;/a&gt;&lt;/sup&gt; for more. &lt;a href=&quot;#fnref:25&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:4&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Ramsey (1928). &lt;a href=&quot;http://piketty.pse.ens.fr/files/Ramsey1928.pdf&quot;&gt;A Mathematical Theory of Saving.&lt;/a&gt; &lt;a href=&quot;#fnref:4&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:4:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt; &lt;a href=&quot;#fnref:4:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;3&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:8&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Greaves (2017). &lt;a href=&quot;http://users.ox.ac.uk/~mert2255/papers/discounting.pdf&quot;&gt;Discounting for public policy: A survey.&lt;/a&gt; &lt;a href=&quot;#fnref:8&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:8:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt; &lt;a href=&quot;#fnref:8:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;3&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:5&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Trammell (2020). &lt;a href=&quot;https://philiptrammell.com/static/discounting_for_patient_philanthropists.pdf&quot;&gt;Discounting for Patient Philanthropists&lt;/a&gt;. Working paper (unpublished). Accessed 2020-06-17. &lt;a href=&quot;#fnref:5&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:5:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt; &lt;a href=&quot;#fnref:5:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;3&lt;/sup&gt;&lt;/a&gt; &lt;a href=&quot;#fnref:5:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;4&lt;/sup&gt;&lt;/a&gt; &lt;a href=&quot;#fnref:5:4&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;5&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:15&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;See Mullins (2018), &lt;a href=&quot;https://www.openphilanthropy.org/files/Blog/Mullins_Retrospective_Analysis_Longterm_Forecasts_Final_Report.pdf&quot;&gt;Retrospective Analysis of Long-Term Forecasts.&lt;/a&gt; This report found that “[a]ll forecast methodologies provide more accurate predictions than uninformed guesses.” &lt;a href=&quot;#fnref:15&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:24&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;In fact, if we do prioritize reducing existential risk, the model as presented in this essay does not work, because the discount rate due to extinction is no longer a constant. &lt;a href=&quot;#fnref:24&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:2&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;The report gave point probability estimates for all causes other than AI. But for AI, it gave a probability range, because “Artificial Intelligence is the global risk where least is known” (p. 164). &lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:18&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I calculated these summary statistics without regard to the quality of the individual predictions. Two of the individual predictions provided lower bounds, not point predictions, but I treated them as point predictions anyway. &lt;a href=&quot;#fnref:18&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:3&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Note that the provided hyperlink goes to a working version of the paper, because as far as I can tell, the final paper is not available for free online. &lt;a href=&quot;#fnref:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:6&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Some people distinguish between superintelligent AI and AGI, where the latter merely has human-level intelligence, not superhuman-level. For simplicity, I treat the two terms as interchangeable. &lt;a href=&quot;#fnref:6&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:17&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Müller &amp;amp; Bostrom (2016). &lt;a href=&quot;https://link.springer.com/chapter/10.1007%2F978-3-319-26485-1_33&quot;&gt;Future Progress in Artificial Intelligence: A Survey of Expert Opinion.&lt;/a&gt; &lt;a href=&quot;#fnref:17&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:26&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Sandberg (n.d.). Everything is transitory, for sufficiently large values of “transitory.” &lt;a href=&quot;#fnref:26&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:20&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Opportunities getting worse with increased spending is accounted for by the concavity of the utility function. But it might make sense to only include EA spending in the utility function, and treat other parties’ spending as a separate parameter. &lt;a href=&quot;#fnref:20&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:22&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Wealth in general is fat-tailed, but it appears even more fat-tailed in EA, where the single largest donor controls more than half the wealth. As of this writing, the richest person in the world controls “only” 0.03% of global wealth ($113 billion out of $361 trillion). &lt;a href=&quot;#fnref:22&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:7&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Weitzman (2001). &lt;a href=&quot;https://scholar.harvard.edu/files/weitzman/files/gamma_discounting.pdf&quot;&gt;Gamma Discounting.&lt;/a&gt; &lt;a href=&quot;#fnref:7&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:7:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt; &lt;a href=&quot;#fnref:7:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;3&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:10&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Nordhaus (2007). &lt;a href=&quot;http://www.econ.yale.edu/~nordhaus/homepage/OldWebFiles/DICEGAMS/dice_mss_072407_all.pdf&quot;&gt;The Challenge of Global Warming: Economic Models and Environmental Policy.&lt;/a&gt; &lt;a href=&quot;#fnref:10&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:11&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Stern Review (2007). &lt;a href=&quot;https://www.brown.edu/Departments/Economics/Faculty/Matthew_Turner/ec1340/readings/Sternreview_full.pdf&quot;&gt;The Economics of Climate Change.&lt;/a&gt; &lt;a href=&quot;#fnref:11&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:16&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Technically this is a continuous model so there are no discrete periods, but you know what I mean. &lt;a href=&quot;#fnref:16&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:12&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Hakansson (1970). &lt;a href=&quot;https://doi.org/10.1016/B978-0-12-780850-5.50045-9&quot;&gt;Optimal Investment and Consumption Strategies Under Risk for a Class of Utility Functions.&lt;/a&gt; &lt;a href=&quot;#fnref:12&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:19&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;A proper discount factor should represent a probability distribution, which means it should have D(0) = 1 and should integrate to 1; but these details don’t matter for the purposes of this proof. &lt;a href=&quot;#fnref:19&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:9&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Loewenstein and Prelec (1992). &lt;a href=&quot;https://doi.org/10.2307/2118482&quot;&gt;Anomalies in Intertemporal Choice: Evidence and an Interpretation.&lt;/a&gt; &lt;a href=&quot;#fnref:9&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:14&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;The traditional problem of hyperbolic discounting is that it causes one’s preferences to change over time, even if no information changes. For example, given the choice between receiving $100 in six months’ time and $120 in seven months, people tend to choose the latter. But if you wait six months and then ask them if they’d rather receive $100 now or $120 in a month, they generally choose the former, even though fundamentally this is the exact same choice.&lt;/p&gt;

      &lt;p&gt;The model under discussion in this essay does not suffer from this problem. In traditional hyperbolic discounting, discount rates decline as a function of their &lt;em&gt;distance from the present&lt;/em&gt;. But in this model, discount rates decline as a result of changes in &lt;em&gt;facts about reality, independent of the time of consideration&lt;/em&gt;. That is, although discount rates decrease hyperbolically, actors at different points in time agree on the value of the discount rate at any particular time, because that discount rate is a function of the extinction/expropriation/value drift risk, not of pure time preference. &lt;a href=&quot;#fnref:14&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>Do Theoretical Models Accurately Predict Optimal Leverage?</title>
				<pubDate>Sun, 21 Jun 2020 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2020/06/21/samuelson_share_predict_optimal_leverage/</link>
				<guid isPermaLink="true">http://mdickens.me/2020/06/21/samuelson_share_predict_optimal_leverage/</guid>
                <description>
                  
                  
                  
                  &lt;h2 id=&quot;summary&quot;&gt;Summary&lt;/h2&gt;

&lt;p&gt;Previously, we talked about &lt;a href=&quot;https://mdickens.me/2020/01/06/how_much_leverage_should_altruists_use/&quot;&gt;how much leverage altruists should use&lt;/a&gt;. We looked at estimates of optimal leverage based on future projected returns, but this required making certain assumptions about how asset prices behave.&lt;/p&gt;

&lt;p&gt;In many ways, theoretical asset pricing models &lt;a href=&quot;https://mdickens.me/2020/01/06/how_much_leverage_should_altruists_use/#caveats&quot;&gt;do not reflect&lt;/a&gt; how investments behave in practice. These models may overestimate how much leverage to use. We can learn something about the extent of this overestimation by backtesting leveraged portfolios on historical price data.&lt;/p&gt;

&lt;p&gt;In the backtests I performed, the theoretically optimal leverage given by the Samuelson share usually did not differ much from the empirically optimal leverage. However, the Samuelson share overestimated optimal leverage more often than it underestimated it, and following the Samuelson share would occasionally have resulted in bankruptcy.&lt;/p&gt;

&lt;p&gt;After performing this analysis, I am now somewhat more confident that it makes sense for altruists to apply substantial leverage to their altruistic portfolios, although probably less than the Samuelson share. However, investors should ensure they understand what that entails—in backtests, optimally-leveraged portfolios usually encountered &amp;gt;90% drawdowns at some points.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Disclaimer: This should not be taken as investment advice. Any given portfolio results are hypothetical and do not represent returns achieved by an actual investor.&lt;/em&gt;&lt;/p&gt;

&lt;!-- more --&gt;

&lt;h2 id=&quot;contents&quot;&gt;Contents&lt;/h2&gt;

&lt;ul id=&quot;markdown-toc&quot;&gt;
  &lt;li&gt;&lt;a href=&quot;#summary&quot; id=&quot;markdown-toc-summary&quot;&gt;Summary&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#contents&quot; id=&quot;markdown-toc-contents&quot;&gt;Contents&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#introduction&quot; id=&quot;markdown-toc-introduction&quot;&gt;Introduction&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#methodology&quot; id=&quot;markdown-toc-methodology&quot;&gt;Methodology&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#asset-classes&quot; id=&quot;markdown-toc-asset-classes&quot;&gt;Asset classes&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#asset-class-time-horizons&quot; id=&quot;markdown-toc-asset-class-time-horizons&quot;&gt;Asset class time horizons&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#asset-class-costs&quot; id=&quot;markdown-toc-asset-class-costs&quot;&gt;Asset class costs&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#results&quot; id=&quot;markdown-toc-results&quot;&gt;Results&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#comparing-samuelson-share-to-optimal-leverage&quot; id=&quot;markdown-toc-comparing-samuelson-share-to-optimal-leverage&quot;&gt;Comparing Samuelson share to optimal leverage&lt;/a&gt;        &lt;ul&gt;
          &lt;li&gt;&lt;a href=&quot;#some-sample-portfolios&quot; id=&quot;markdown-toc-some-sample-portfolios&quot;&gt;Some sample portfolios&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#commentary&quot; id=&quot;markdown-toc-commentary&quot;&gt;Commentary&lt;/a&gt;&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#effects-of-under--and-over-estimating-optimal-leverage&quot; id=&quot;markdown-toc-effects-of-under--and-over-estimating-optimal-leverage&quot;&gt;Effects of under- and over-estimating optimal leverage&lt;/a&gt;        &lt;ul&gt;
          &lt;li&gt;&lt;a href=&quot;#on-risk-aversion&quot; id=&quot;markdown-toc-on-risk-aversion&quot;&gt;On risk aversion&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#some-sample-portfolios-1&quot; id=&quot;markdown-toc-some-sample-portfolios-1&quot;&gt;Some sample portfolios&lt;/a&gt;&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#rebalance-timing-luck&quot; id=&quot;markdown-toc-rebalance-timing-luck&quot;&gt;Rebalance timing luck&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#path-dependence&quot; id=&quot;markdown-toc-path-dependence&quot;&gt;Path dependence&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#how-does-ex-ante-optimal-leverage-compare-to-ex-post&quot; id=&quot;markdown-toc-how-does-ex-ante-optimal-leverage-compare-to-ex-post&quot;&gt;How does ex-ante optimal leverage compare to ex-post?&lt;/a&gt;        &lt;ul&gt;
          &lt;li&gt;&lt;a href=&quot;#some-sample-portfolios-2&quot; id=&quot;markdown-toc-some-sample-portfolios-2&quot;&gt;Some sample portfolios&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#commentary-1&quot; id=&quot;markdown-toc-commentary-1&quot;&gt;Commentary&lt;/a&gt;&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#conclusions&quot; id=&quot;markdown-toc-conclusions&quot;&gt;Conclusions&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#appendix&quot; id=&quot;markdown-toc-appendix&quot;&gt;Appendix&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#appendix-a-more-sample-portfolios&quot; id=&quot;markdown-toc-appendix-a-more-sample-portfolios&quot;&gt;Appendix A: More sample portfolios&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#appendix-b-more-over--or-under-leveraged-sample-portfolios&quot; id=&quot;markdown-toc-appendix-b-more-over--or-under-leveraged-sample-portfolios&quot;&gt;Appendix B: More over- or under-leveraged sample portfolios&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#appendix-c-more-ex-ante-optimal-leverage-portfolios&quot; id=&quot;markdown-toc-appendix-c-more-ex-ante-optimal-leverage-portfolios&quot;&gt;Appendix C: More ex-ante optimal leverage portfolios&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#notes&quot; id=&quot;markdown-toc-notes&quot;&gt;Notes&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h1 id=&quot;introduction&quot;&gt;Introduction&lt;/h1&gt;

&lt;p&gt;When I last &lt;a href=&quot;https://mdickens.me/2020/01/06/how_much_leverage_should_altruists_use/&quot;&gt;wrote about how much leverage altruists might use&lt;/a&gt;, I considered a number of &lt;a href=&quot;https://mdickens.me/2020/01/06/how_much_leverage_should_altruists_use/#caveats&quot;&gt;caveats&lt;/a&gt; that make leverage look less appealing than theoretical models suggest.&lt;/p&gt;

&lt;p&gt;These caveats included:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Mean reversion&lt;/li&gt;
  &lt;li&gt;Left skew of investment returns&lt;/li&gt;
  &lt;li&gt;Unpredictability of future returns&lt;/li&gt;
  &lt;li&gt;Cost of leverage&lt;/li&gt;
  &lt;li&gt;Transaction costs&lt;/li&gt;
  &lt;li&gt;Taxes&lt;/li&gt;
  &lt;li&gt;Portfolios can’t be adjusted continuously&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;(The &lt;a href=&quot;https://mdickens.me/2020/01/06/how_much_leverage_should_altruists_use/#caveats&quot;&gt;original essay&lt;/a&gt; explained these caveats in more detail.)&lt;/p&gt;

&lt;p&gt;For some of these, we can learn how much they matter by running backtests on historical data. Backtests can only tell us so much about what to expect from future performance, but they can tell us a lot about how to deal with some of these caveats:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;If mean reversion or left skew reduce the value of leverage, this will show up in backtests.&lt;/li&gt;
  &lt;li&gt;While theoretical models of leverage generally assume continuous rebalancing, backtests can use a more practical rebalancing schedule (such as monthly).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For some other caveats, backtests don’t definitively tell us how much they matter, but they can help:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;We can estimate costs of leverage, transaction costs, and taxes and subtract them from historical returns.&lt;/li&gt;
  &lt;li&gt;Backtests obviously cannot resolve the unpredictability of future returns, but they can tell us how a portfolio would have performed if we had mis-estimated future returns or volatility.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1 id=&quot;methodology&quot;&gt;Methodology&lt;/h1&gt;

&lt;p&gt;I performed backtests using publicly-available data from the &lt;a href=&quot;http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/data_library.html&quot;&gt;Ken French data library&lt;/a&gt;, the &lt;a href=&quot;https://www.aqr.com/Insights/Datasets&quot;&gt;AQR data sets&lt;/a&gt;&lt;sup id=&quot;fnref:8&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:8&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;, and &lt;a href=&quot;https://doi.org/10.25397/eur.8152748&quot;&gt;government bond return series&lt;/a&gt; from Laurens Swinkels&lt;sup id=&quot;fnref:4&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:4&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;The backtests built in these assumptions:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;The investor has logarithmic utility.&lt;/strong&gt; This is probably not true for most self-interested investors, but it may apply for altruistic investors. I discuss this in more detail &lt;a href=&quot;https://mdickens.me/2020/01/06/how_much_leverage_should_altruists_use/#risk-aversion-for-altruistic-causes&quot;&gt;here&lt;/a&gt;. In this essay, I will primarily assume logarithmic utility because it appears reasonable and it makes the results easier to reason about.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Portfolios are rebalanced, and leverage is adjusted, on a monthly basis.&lt;/strong&gt;&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;The investor does not pay taxes.&lt;/strong&gt;&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;The investor can borrow at an interest rate pegged to the risk-free rate (RF).&lt;/strong&gt; I tested both with interest equal to RF and RF + 1%. Theoretical models generally assume leverage costs the risk-free rate, and some institutional investors may be able to get rates close to RF, while smaller investors generally have to pay more like RF + 1%.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Asset classes have &lt;a href=&quot;#asset-class-costs&quot;&gt;varying costs&lt;/a&gt;&lt;/strong&gt; depending on their liquidity, trading frequency, or how much investment managers tend to charge for them.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;asset-classes&quot;&gt;Asset classes&lt;/h2&gt;

&lt;p&gt;The simulated portfolios in this essay use various combinations of these six asset classes.&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;&lt;strong&gt;US stocks&lt;/strong&gt;: A total market index including all US stocks.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;International stocks&lt;/strong&gt;: A total market index including all stocks in all countries other than the United States.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Bonds&lt;/strong&gt;: US 10-year Treasury bonds.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Commodities&lt;/strong&gt;: A passive commodity futures strategy that follows the methodology described in Levine et al. (2016), &lt;a href=&quot;https://www.aqr.com/Insights/Research/Journal-Article/Commodities-for-the-Long-Run&quot;&gt;Commodities for the Long Run&lt;/a&gt;, using the &lt;a href=&quot;https://www.aqr.com/Insights/Datasets/Commodities-for-the-Long-Run-Index-Level-Data-Monthly&quot;&gt;associated data set&lt;/a&gt;.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;US value/momentum&lt;/strong&gt;: A strategy that buys the top decile of value stocks ranked by cash flow to price (CF/P) and the top decile of momentum stocks ranked by 12-2 momentum, allocating 50% to each of the two sub-strategies. I use CF/P as the value metric rather than the more typical book to price or earnings to price because cash flow is less subject to accounting details than book value or earnings. I made this choice before looking at any performance results. For more details on methodology, see Ken French’s descriptions: &lt;a href=&quot;http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/Data_Library/det_port_form_cfp.html&quot;&gt;value&lt;/a&gt;, &lt;a href=&quot;http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/Data_Library/det_10_port_form_pr_12_2.html&quot;&gt;momentum&lt;/a&gt;.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;TSMOM&lt;/strong&gt;: Time series momentum, also known as managed futures—a strategy that I identified as particularly promising in my &lt;a href=&quot;https://mdickens.me/2020/01/06/how_much_leverage_should_altruists_use/#managed-futures-trendfollowing&quot;&gt;original essay on leverage for altruists&lt;/a&gt;&lt;sup id=&quot;fnref:5&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:5&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt;. The details of the methodology are described by Hurst et al. (2013), &lt;a href=&quot;https://www.aqr.com/Insights/Research/Journal-Article/Demystifying-Managed-Futures&quot;&gt;Demystifying Managed Futures&lt;/a&gt;. Hypothetical historical performance was calculated using &lt;a href=&quot;https://www.aqr.com/Insights/Datasets/Time-Series-Momentum-Factors-Monthly&quot;&gt;the associated data set&lt;/a&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2 id=&quot;asset-class-time-horizons&quot;&gt;Asset class time horizons&lt;/h2&gt;

&lt;p&gt;Some data sources cover shorter time periods than others. When running a backtest that combined multiple asset classes, I used the longest possible time horizon for which all asset classes have data.&lt;/p&gt;

&lt;p&gt;The asset classes have data available starting in the following years:&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Asset Class&lt;/th&gt;
      &lt;th&gt;Year&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;US stocks&lt;/td&gt;
      &lt;td&gt;1926&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;international stocks&lt;/td&gt;
      &lt;td&gt;1990&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;bonds&lt;/td&gt;
      &lt;td&gt;1947&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;commodities&lt;/td&gt;
      &lt;td&gt;1877&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;US value/momentum&lt;/td&gt;
      &lt;td&gt;1952&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;TSMOM&lt;/td&gt;
      &lt;td&gt;1985&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Each backtest ends in 2018. The exact end month can vary, but this has only a small effect on the results.&lt;/p&gt;
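&lt;p&gt;The combined start year for a multi-asset backtest is simply the latest of the individual start years in the table above. A minimal sketch (the dictionary keys mirror the table; the helper function is hypothetical, not from the backtest code):&lt;/p&gt;

```python
# Data-availability start years, taken from the table above.
start_years = {
    "US stocks": 1926, "international stocks": 1990, "bonds": 1947,
    "commodities": 1877, "US value/momentum": 1952, "TSMOM": 1985,
}

def backtest_start(assets):
    # The longest horizon for which all asset classes have data begins at
    # the latest individual start year.
    return max(start_years[a] for a in assets)

# A US stocks + bonds + commodities backtest can start in 1947;
# adding TSMOM pushes the start to 1985.
```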

&lt;h2 id=&quot;asset-class-costs&quot;&gt;Asset class costs&lt;/h2&gt;

&lt;p&gt;I assumed the following costs for each asset class. These include both fees and transaction costs.&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Asset Class&lt;/th&gt;
      &lt;th&gt;Cost&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;US stocks&lt;/td&gt;
      &lt;td&gt;0%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;international stocks&lt;/td&gt;
      &lt;td&gt;0%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;bonds&lt;/td&gt;
      &lt;td&gt;0%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;commodities&lt;/td&gt;
      &lt;td&gt;1%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;US value/momentum&lt;/td&gt;
      &lt;td&gt;2%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;TSMOM&lt;/td&gt;
      &lt;td&gt;5%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Stock index funds rarely trade, so they have near-zero trading costs, and many fund providers offer mutual funds and ETFs with extremely low fees.&lt;/p&gt;

&lt;p&gt;Bond funds have to trade more often as bonds reach maturity and must be rolled over, but the Treasury bond market is highly liquid, so this incurs minimal trading costs.&lt;/p&gt;

&lt;p&gt;Commodity index funds must regularly trade old futures contracts for new ones, and generally charge higher management fees than stock or bond funds. I don’t have a great sense of how much these cost, so I assumed 1%. This probably underestimates the true cost, but this doesn’t matter much for our purposes.&lt;/p&gt;

&lt;p&gt;Frazzini et al. (2013)&lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt; found a real-world trading cost of 2.07% for a global value/momentum portfolio run from 1998 to 2011, and an estimated 1.37% trading cost for the same portfolio back to 1926. Furthermore, a value/momentum portfolio with cost-reducing optimizations cost only 1.11% with no reduction in gross return (gross return actually slightly increased, but this was probably due to noise). This analysis may overestimate trading costs relative to my simulated portfolios because it uses long-short strategies while I use long-only; but it definitely underestimates total costs in that it does not include fund fees. Concentrated value/momentum funds exist that cost as little as 0.6%, or investors can manage their own portfolios at the expense of more time spent. Quantitative value and momentum strategies require relatively little effort to trade manually—less than five hours per year&lt;sup id=&quot;fnref:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;5&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;I have a lot of uncertainty about how much TSMOM costs. Such strategies are usually run by hedge funds that charge a 2% fee plus 20% carry, which equates to about a 6% flat fee, but it’s possible to find cheaper implementations. The strategy also incurs substantial transaction costs. Hurst et al. (2013)&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;6&lt;/a&gt;&lt;/sup&gt; estimate them at 1-4% depending on implementation details. A 5% all-in cost falls on the low end of the plausible cost range, but does seem achievable by investors who seek out low-fee, cost-conscious funds.&lt;/p&gt;

&lt;p&gt;AXS Chesapeake Strategy Mutual Fund (EQCHX), a real-world fund that follows a strategy similar to TSMOM, provides &lt;a href=&quot;http://www.chesapeakecapital.com/performance/&quot;&gt;performance data&lt;/a&gt; net of fees over the period 1988 to 2020. (Disclaimer: At the time of this writing, I have money invested in this fund.) If we compare these net returns to the gross returns of the AQR TSMOM portfolio and apply leverage so they have equal variance, we see that EQCHX returned 6-7% less over the sample period. This suggests an all-in cost of 6-7%, although presumably part of this difference in return happened due to implementation differences (the two return series are only correlated with r=0.53).&lt;/p&gt;
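&lt;p&gt;The equal-variance comparison above can be sketched as follows. This uses short synthetic stand-in series (the real comparison uses the AQR and EQCHX return histories): scale the gross series so its volatility matches the net series, then read the annualized gap in mean returns as a rough all-in cost estimate.&lt;/p&gt;

```python
import statistics

# Hypothetical monthly return series standing in for the gross index and the
# net-of-fee fund; these numbers are made up for illustration.
gross = [0.02, -0.01, 0.03, 0.015, -0.005, 0.01]
net = [0.01, -0.02, 0.02, 0.005, -0.015, 0.0]

# Lever (scale) the gross series so both series have equal volatility.
scale = statistics.stdev(net) / statistics.stdev(gross)
levered_gross = [scale * r for r in gross]

# Annualized difference in mean return: a rough estimate of all-in costs
# plus implementation differences.
cost_estimate = 12 * (statistics.mean(levered_gross) - statistics.mean(net))
```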

&lt;h1 id=&quot;results&quot;&gt;Results&lt;/h1&gt;

&lt;h2 id=&quot;comparing-samuelson-share-to-optimal-leverage&quot;&gt;Comparing Samuelson share to optimal leverage&lt;/h2&gt;

&lt;p&gt;The Samuelson share (a term I adopted from &lt;a href=&quot;http://lifecycleinvesting.net/&quot;&gt;Lifecycle Investing&lt;/a&gt; by Ian Ayres and Barry Nalebuff) indicates how much leverage an investor should theoretically use. The Samuelson share for an investor with logarithmic utility of money is given by&lt;/p&gt;

&lt;p&gt;\begin{align}
\displaystyle\frac{\mu - R}{\sigma^2}
\end{align}&lt;/p&gt;

&lt;p&gt;where \(\mu\) is the expected return of the investor’s portfolio, \(R\) is the cost of leverage, and \(\sigma\) is the standard deviation. This formula changes for investors with non-logarithmic utility, but for this essay, I will assume utility is logarithmic. This serves to simplify the analysis, and it seems like a reasonable best guess for how altruists’ utility functions behave.&lt;/p&gt;
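&lt;p&gt;As a quick illustration, the formula translates directly to code. The inputs below are made-up numbers for the sake of example, not estimates endorsed by this essay:&lt;/p&gt;

```python
def samuelson_share(mu, r_cost, sigma):
    """Samuelson share for logarithmic utility: (mu - R) / sigma^2,
    where R is the cost of leverage."""
    return (mu - r_cost) / sigma ** 2

# Hypothetical inputs: 8% expected return, 3% cost of leverage,
# 16% standard deviation.
print(samuelson_share(0.08, 0.03, 0.16))  # ≈ 1.95
```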

&lt;p&gt;(Ayres and Nalebuff have a &lt;a href=&quot;http://lifecycleinvesting.net/resources.html&quot;&gt;Samuelson share calculator&lt;/a&gt; on their website.)&lt;/p&gt;

&lt;p&gt;For this analysis, I performed a series of backtests on a variety of portfolios and time horizons. For each portfolio, I first found the return and standard deviation of the portfolio. I used these two numbers to calculate the theoretically-optimal Samuelson share. Then I re-ran the backtest using varying amounts of leverage to find the level that maximized geometric return, which is equivalent to maximizing utility for a logarithmic utility function. I refer to this as “empirically optimal leverage” in the sense that it maximized geometric return over the sample. This does not say anything about its optimality out of sample.&lt;/p&gt;
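&lt;p&gt;The search for empirically optimal leverage can be sketched as follows. This is a minimal illustration of the procedure, not the actual backtest code; it assumes monthly rebalancing and a flat per-month borrowing cost:&lt;/p&gt;

```python
import numpy as np

def geometric_return(monthly_returns, leverage, borrow_rate=0.0):
    """Annualized geometric return of a monthly-rebalanced leveraged
    portfolio. Returns -1.0 (total loss) if the portfolio goes bust."""
    wealth = 1.0
    for r in monthly_returns:
        wealth *= 1 + leverage * r - (leverage - 1) * borrow_rate
        if wealth <= 0:
            return -1.0
    return wealth ** (12 / len(monthly_returns)) - 1

def empirically_optimal_leverage(monthly_returns, borrow_rate=0.0):
    """Grid-search for the leverage that maximized geometric return over
    the sample, which is equivalent to maximizing logarithmic utility."""
    grid = np.arange(0.0, 8.0, 0.01)
    return max(grid, key=lambda lev: geometric_return(monthly_returns, lev, borrow_rate))
```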

&lt;p&gt;Note that this calculation of the Samuelson share requires perfect foresight—it assumes we know the return and standard deviation over the whole period, which we obviously cannot know in advance. So the Samuelson share does not represent how much leverage we would have thought to use if we had invested over the period. Instead, it represents the theoretically-optimal leverage according to the standard model of how asset prices operate. The point of this analysis is to tell us how historical performance differed from what the theoretical model predicts.&lt;/p&gt;

&lt;p&gt;While this analysis looks at past returns, this is not to suggest that future returns will look the same. While we cannot extrapolate returns from these backtests, we &lt;em&gt;can&lt;/em&gt; extrapolate knowledge about how well theoretical models predict optimal leverage, insofar as we expect asset prices to behave similarly in the future to how they have in the past (in the shapes of their distributions, not in their returns).&lt;/p&gt;

&lt;h3 id=&quot;some-sample-portfolios&quot;&gt;Some sample portfolios&lt;/h3&gt;

&lt;p&gt;These tables show statistics for simulated portfolios taken 1952-2018, including US stocks, bonds, and commodities. I assume leverage costs RF + 1%, and take the costs for each asset class given in &lt;a href=&quot;#asset-class-costs&quot;&gt;Asset class costs&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Columns are defined as follows:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Lev&lt;/strong&gt;: Empirically optimal leverage.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Sam&lt;/strong&gt;: Theoretically optimal Samuelson share, calculated using the return, risk-free rate, and standard deviation over the sample period.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Sharpe&lt;/strong&gt;: &lt;a href=&quot;https://en.wikipedia.org/wiki/Sharpe_ratio&quot;&gt;Sharpe ratio&lt;/a&gt;, or risk-adjusted return, calculated as \(\frac{\mu - RF}{\sigma}\).&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;CAGR&lt;/strong&gt;: Compound annual growth rate, i.e., geometric return.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Stdev&lt;/strong&gt;: Annualized standard deviation of monthly returns.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;MaxDD&lt;/strong&gt;: Maximum drawdown experienced over the sample period.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Table 1:&lt;/strong&gt; Summary statistics, 1952-2018&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt;Lev&lt;/th&gt;
      &lt;th&gt;Sharpe&lt;/th&gt;
      &lt;th&gt;CAGR&lt;/th&gt;
      &lt;th&gt;Stdev&lt;/th&gt;
      &lt;th&gt;MaxDD&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;US stocks&lt;/td&gt;
      &lt;td&gt;2.66&lt;/td&gt;
      &lt;td&gt;0.27&lt;/td&gt;
      &lt;td&gt;15.1&lt;/td&gt;
      &lt;td&gt;39.0&lt;/td&gt;
      &lt;td&gt;93%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;bonds&lt;/td&gt;
      &lt;td&gt;1.32&lt;/td&gt;
      &lt;td&gt;0.15&lt;/td&gt;
      &lt;td&gt;5.8&lt;/td&gt;
      &lt;td&gt;9.8&lt;/td&gt;
      &lt;td&gt;25%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;commodities&lt;/td&gt;
      &lt;td&gt;1.34&lt;/td&gt;
      &lt;td&gt;0.15&lt;/td&gt;
      &lt;td&gt;7.0&lt;/td&gt;
      &lt;td&gt;17.9&lt;/td&gt;
      &lt;td&gt;68%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;US val/mom&lt;/td&gt;
      &lt;td&gt;2.72&lt;/td&gt;
      &lt;td&gt;0.44&lt;/td&gt;
      &lt;td&gt;28.8&lt;/td&gt;
      &lt;td&gt;53.7&lt;/td&gt;
      &lt;td&gt;96%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;60% stock, 40% bond&lt;/td&gt;
      &lt;td&gt;4.29&lt;/td&gt;
      &lt;td&gt;0.26&lt;/td&gt;
      &lt;td&gt;15.3&lt;/td&gt;
      &lt;td&gt;40.6&lt;/td&gt;
      &lt;td&gt;93%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;60% val/mom, 40% bond&lt;/td&gt;
      &lt;td&gt;6.63&lt;/td&gt;
      &lt;td&gt;0.41&lt;/td&gt;
      &lt;td&gt;27.7&lt;/td&gt;
      &lt;td&gt;55.2&lt;/td&gt;
      &lt;td&gt;98%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;40% stock, 40% bond, 20% com&lt;/td&gt;
      &lt;td&gt;5.46&lt;/td&gt;
      &lt;td&gt;0.27&lt;/td&gt;
      &lt;td&gt;15.8&lt;/td&gt;
      &lt;td&gt;40.4&lt;/td&gt;
      &lt;td&gt;90%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;40% val/mom, 40% bond, 20% com&lt;/td&gt;
      &lt;td&gt;5.89&lt;/td&gt;
      &lt;td&gt;0.43&lt;/td&gt;
      &lt;td&gt;28.5&lt;/td&gt;
      &lt;td&gt;53.4&lt;/td&gt;
      &lt;td&gt;97%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;&lt;strong&gt;Table 2:&lt;/strong&gt; Optimal leverage and Samuelson share, 1952-2018&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt;Lev&lt;/th&gt;
      &lt;th&gt;Sam&lt;/th&gt;
      &lt;th&gt;Lev-Sam&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;US stocks&lt;/td&gt;
      &lt;td&gt;2.66&lt;/td&gt;
      &lt;td&gt;2.57&lt;/td&gt;
      &lt;td&gt;0.09&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;bonds&lt;/td&gt;
      &lt;td&gt;1.32&lt;/td&gt;
      &lt;td&gt;0.82&lt;/td&gt;
      &lt;td&gt;0.5&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;commodities&lt;/td&gt;
      &lt;td&gt;1.34&lt;/td&gt;
      &lt;td&gt;0.84&lt;/td&gt;
      &lt;td&gt;0.5&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;US val/mom&lt;/td&gt;
      &lt;td&gt;2.72&lt;/td&gt;
      &lt;td&gt;3.11&lt;/td&gt;
      &lt;td&gt;-0.39&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;60% stock, 40% bond&lt;/td&gt;
      &lt;td&gt;4.29&lt;/td&gt;
      &lt;td&gt;4.24&lt;/td&gt;
      &lt;td&gt;0.05&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;60% val/mom, 40% bond&lt;/td&gt;
      &lt;td&gt;6.63&lt;/td&gt;
      &lt;td&gt;7.59&lt;/td&gt;
      &lt;td&gt;-0.96&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;40% US stock, 40% bond, 20% com&lt;/td&gt;
      &lt;td&gt;5.46&lt;/td&gt;
      &lt;td&gt;5.72&lt;/td&gt;
      &lt;td&gt;-0.26&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;40% val/mom, 40% bond, 20% com&lt;/td&gt;
      &lt;td&gt;5.89&lt;/td&gt;
      &lt;td&gt;7.19&lt;/td&gt;
      &lt;td&gt;-1.3&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;The following tables give summary statistics for simulated portfolios taken 1985-2018, including time-series momentum (TSMOM). In my previous essay on &lt;a href=&quot;https://mdickens.me/2020/01/06/how_much_leverage_should_altruists_use/&quot;&gt;leverage for altruists&lt;/a&gt;, I presented two key potentially-market-beating strategies: &lt;a href=&quot;https://mdickens.me/2020/01/06/how_much_leverage_should_altruists_use/#improving-on-conventional-investing-wisdom&quot;&gt;value/momentum&lt;/a&gt; and &lt;a href=&quot;https://mdickens.me/2020/01/06/how_much_leverage_should_altruists_use/#managed-futures-trendfollowing&quot;&gt;managed futures&lt;/a&gt; (also known as time series momentum (TSMOM)&lt;sup id=&quot;fnref:5:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:5&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt;). We would like to see how these strategies hypothetically would have performed with optimal leverage, and how optimal leverage compares to Samuelson share over the sample period. Note that the methodologies used here do not perfectly match the methodologies I used to derive hypothetical returns in my previous essay, so the numbers will differ somewhat.&lt;/p&gt;

&lt;p&gt;These results cover a shorter time horizon, so they tell us less about how these strategies might have performed in various economic regimes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Table 3:&lt;/strong&gt; Summary statistics, 1985-2018&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt;Lev&lt;/th&gt;
      &lt;th&gt;Sharpe&lt;/th&gt;
      &lt;th&gt;CAGR&lt;/th&gt;
      &lt;th&gt;Stdev&lt;/th&gt;
      &lt;th&gt;MaxDD&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;US stocks&lt;/td&gt;
      &lt;td&gt;2.75&lt;/td&gt;
      &lt;td&gt;0.31&lt;/td&gt;
      &lt;td&gt;16.7&lt;/td&gt;
      &lt;td&gt;41.4&lt;/td&gt;
      &lt;td&gt;93%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;US val/mom&lt;/td&gt;
      &lt;td&gt;2.40&lt;/td&gt;
      &lt;td&gt;0.38&lt;/td&gt;
      &lt;td&gt;22.0&lt;/td&gt;
      &lt;td&gt;47.3&lt;/td&gt;
      &lt;td&gt;92%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;TSMOM&lt;/td&gt;
      &lt;td&gt;5.91&lt;/td&gt;
      &lt;td&gt;0.43&lt;/td&gt;
      &lt;td&gt;35.2&lt;/td&gt;
      &lt;td&gt;72.2&lt;/td&gt;
      &lt;td&gt;95%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;50% val/mom, 50% TSMOM&lt;/td&gt;
      &lt;td&gt;4.38&lt;/td&gt;
      &lt;td&gt;0.68&lt;/td&gt;
      &lt;td&gt;39.0&lt;/td&gt;
      &lt;td&gt;50.9&lt;/td&gt;
      &lt;td&gt;89%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;&lt;strong&gt;Table 4:&lt;/strong&gt; Optimal leverage and Samuelson share, 1985-2018&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt;Lev&lt;/th&gt;
      &lt;th&gt;Sam&lt;/th&gt;
      &lt;th&gt;Lev-Sam&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;US stocks&lt;/td&gt;
      &lt;td&gt;2.75&lt;/td&gt;
      &lt;td&gt;2.93&lt;/td&gt;
      &lt;td&gt;-0.18&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;US val/mom&lt;/td&gt;
      &lt;td&gt;2.40&lt;/td&gt;
      &lt;td&gt;2.67&lt;/td&gt;
      &lt;td&gt;-0.27&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;TSMOM&lt;/td&gt;
      &lt;td&gt;5.91&lt;/td&gt;
      &lt;td&gt;5.34&lt;/td&gt;
      &lt;td&gt;0.57&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;50% val/mom, 50% TSMOM&lt;/td&gt;
      &lt;td&gt;4.38&lt;/td&gt;
      &lt;td&gt;7.33&lt;/td&gt;
      &lt;td&gt;-2.95&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Finally, let’s look at the results for US stocks 1927-2018, a period that includes the Great Depression.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Table 5:&lt;/strong&gt; Summary statistics, 1927-2018&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt;Lev&lt;/th&gt;
      &lt;th&gt;Sharpe&lt;/th&gt;
      &lt;th&gt;CAGR&lt;/th&gt;
      &lt;th&gt;Stdev&lt;/th&gt;
      &lt;th&gt;MaxDD&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;US stocks&lt;/td&gt;
      &lt;td&gt;1.91&lt;/td&gt;
      &lt;td&gt;0.23&lt;/td&gt;
      &lt;td&gt;11.8&lt;/td&gt;
      &lt;td&gt;35.2&lt;/td&gt;
      &lt;td&gt;98%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;&lt;strong&gt;Table 6:&lt;/strong&gt; Optimal leverage and Samuelson share, 1927-2018&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt;Lev&lt;/th&gt;
      &lt;th&gt;Sam&lt;/th&gt;
      &lt;th&gt;Lev-Sam&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;US stocks&lt;/td&gt;
      &lt;td&gt;1.91&lt;/td&gt;
      &lt;td&gt;1.60&lt;/td&gt;
      &lt;td&gt;0.31&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;I present more hypothetical portfolios in &lt;a href=&quot;#appendix-a-more-sample-portfolios&quot;&gt;Appendix A&lt;/a&gt;, including ones constructed using international equities.&lt;/p&gt;

&lt;p&gt;I also tested all of the above portfolios with the assumptions that no asset classes have fees and leverage only costs as much as the risk-free rate. Samuelson share and optimal leverage were both uniformly lower. Removing asset class fees appeared to slightly reduce the difference between Samuelson share and optimal leverage, but reducing the cost of leverage did not appear to matter.&lt;/p&gt;

&lt;h3 id=&quot;commentary&quot;&gt;Commentary&lt;/h3&gt;

&lt;p&gt;First, how does Samuelson share compare to optimal leverage?&lt;/p&gt;

&lt;p&gt;I do not believe we can make any statistically significant claims from this backtest. The above results only include five asset classes, which means no matter how many portfolios we construct, we only have five independent data points&lt;sup id=&quot;fnref:9&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:9&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;7&lt;/a&gt;&lt;/sup&gt;. And in fact, not all of the asset classes are independent—US stocks and US val/mom, for example, are positively correlated. We would like to know whether there exists a statistically significant difference between optimal leverage and the Samuelson share, but we can’t know that. The best we can do is make some general observations.&lt;/p&gt;

&lt;p&gt;For most hypothetical portfolios, Samuelson share did a moderately good job of approximating optimal leverage. Based on this sample, the Samuelson share appears about equally likely to over- or under-estimate optimal leverage. However, it sometimes overestimated leverage by a large margin, such as for the “50% US val/mom, 50% TSMOM” portfolio. As we will discuss in the next section, overestimating leverage can lead to catastrophic outcomes. It appears that using the Samuelson share to determine optimal leverage (even with perfect foresight) has a non-trivial chance of producing a portfolio that will lose more than 100% of its value in a bad market crash.&lt;/p&gt;

&lt;p&gt;On the other hand, Samuelson share does a better job of approximating optimal leverage than I expected. The Samuelson share embeds assumptions that market returns follow a normal distribution and that each period is independent. In reality, the return distribution shows a left skew, and prices tend to mean revert. But, at least based on this limited sample, it does not appear that the Samuelson share &lt;em&gt;consistently&lt;/em&gt; overestimates optimal leverage, although it does overestimate sometimes. In &lt;a href=&quot;https://mdickens.me/2020/01/06/how_much_leverage_should_altruists_use/#deviations-from-the-theoretical-model&quot;&gt;my previous essay on leverage&lt;/a&gt;, I suggested that investors could use half as much leverage as the Samuelson share suggests, an idea taken from MacLean et al. (2010), &lt;a href=&quot;https://www.stat.berkeley.edu/~aldous/157/Papers/Good_Bad_Kelly.pdf&quot;&gt;Good and bad properties of the Kelly criterion&lt;/a&gt;. But this backtest seems to suggest that this rule of thumb may be too conservative. Of course, this backtest does not tell us what to expect from future returns, but it does eliminate certain classes of concerns, such as the left skew of investment returns (unless we expect future returns to have a significantly stronger left skew than past returns did, which I find implausible).&lt;/p&gt;

&lt;p&gt;Next, observe that all optimally-leveraged portfolios (except bonds) experienced large drawdowns, usually exceeding 90%. (Leveraged US equities would have experienced a 98% drawdown during the Great Depression!)&lt;/p&gt;

&lt;p&gt;This chart shows a zoomed-in view of optimally-leveraged US stocks’ performance during the Great Depression, with the price on 1926-07-01 (the first day of the sample) normalized to $1:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/US-stocks-drawdown-1929.png&quot; /&gt;&lt;/p&gt;

&lt;p&gt;In the global financial crisis of 2008, this portfolio would have lost over 75% of its value:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/US-stocks-drawdown-2008.png&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Using optimal leverage according to the 1952-2018 backtest (instead of 1927-2018) would have produced an even larger drawdown in 2008.&lt;/p&gt;

&lt;h2 id=&quot;effects-of-under--and-over-estimating-optimal-leverage&quot;&gt;Effects of under- and over-estimating optimal leverage&lt;/h2&gt;

&lt;p&gt;As discussed previously, we can only know the optimal Samuelson share in hindsight. When looking forward, we can only try our best to estimate future expected return and standard deviation. What happens if we get it wrong?&lt;/p&gt;

&lt;p&gt;We can learn the impact of using the wrong amount of leverage by looking at how historical portfolios would have performed with too little or too much leverage. For this analysis, I calculate optimal leverage for each portfolio, then find what would have happened with 50 percentage points too little or too much leverage. That is, if optimal leverage was (say) 200%, what would have happened if we had used 150% or 250% instead?&lt;/p&gt;

&lt;p&gt;In most cases, using too much or too little leverage had a symmetric effect on (geometric) return. For example, for US stocks 1927-2018, optimal leverage equaled 1.91, which gave an 11.8% return. Using 1.41 or 2.41 leverage resulted in an 11.2% return, with standard deviations of 26% and 45%, respectively. So someone with logarithmic utility is indifferent between these options, and someone with sub-logarithmic utility much prefers the option with too little leverage because it provides equal return with far less risk.&lt;/p&gt;
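&lt;p&gt;This comparison can be reproduced in miniature. The sketch below uses a hypothetical simulated return series, not the historical data, and omits leverage financing costs:&lt;/p&gt;

```python
import numpy as np

def cagr(monthly_returns, leverage):
    """Annualized geometric return of a monthly-rebalanced leveraged
    portfolio; -1.0 means the portfolio went bust."""
    wealth = 1.0
    for r in monthly_returns:
        wealth *= 1 + leverage * r
        if wealth <= 0:
            return -1.0
    return wealth ** (12 / len(monthly_returns)) - 1

rng = np.random.default_rng(1)
rets = rng.normal(0.006, 0.05, 800)  # hypothetical monthly returns
grid = np.arange(0.5, 6.0, 0.01)
target = max(grid, key=lambda lev: cagr(rets, lev))

# CAGR falls off roughly symmetrically on either side of the target,
# but the over-leveraged side carries far more risk.
for lev in (target - 0.5, target, target + 0.5):
    print(f"leverage {lev:.2f}: CAGR {cagr(rets, lev):.1%}")
```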

&lt;p&gt;In some cases, the over-leveraged portfolio would have gone bust, which, according to a logarithmic utility function, produces \(-\infty\) utility. Thus, all else equal, even a logarithmic utility function would prefer to undershoot optimal leverage than to overshoot it&lt;sup id=&quot;fnref:10&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:10&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;8&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;h3 id=&quot;on-risk-aversion&quot;&gt;On risk aversion&lt;/h3&gt;

&lt;p&gt;For US equities 1927-2018, the over-leveraged and under-leveraged portfolios both ended up at the same place, but they took different routes to get there:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/leverage-over-under-US-stocks.png&quot; /&gt;&lt;/p&gt;

&lt;p&gt;This graph uses a log scale, so a price drop from one horizontal line to the one below represents a 90% drawdown. As we can see, the over-leveraged portfolio experienced multiple 80%+ drawdowns, and went through an extraordinarily painful 99.6% drawdown during the Great Depression. The over-leveraged portfolio underperformed the under-leveraged one throughout almost the entire period, and only made up the difference right at the end of the sample (as well as surpassing the under-leveraged portfolio for exactly one month at the peak of the tech bubble).&lt;/p&gt;

&lt;p&gt;An investor who truly has logarithmic utility doesn’t care about drawdowns. Individuals in practice almost certainly don’t experience logarithmic utility of money—in this simulation, almost every real-world investor would strongly prefer the 1.41-leveraged portfolio over 2.41. But perhaps altruists should bite the bullet on this and accept that the two strategies are equally good for a philanthropically-directed basket of money. I am not convinced that altruists should in fact use a logarithmic utility function, but I find it plausible at least. We have some theoretical reasons to prefer logarithmic utility; for more on this, see Owen Cotton-Barratt’s &lt;a href=&quot;http://www.fhi.ox.ac.uk/law-of-logarithmic-returns/&quot;&gt;The Law of Logarithmic Returns&lt;/a&gt;. I provide some open-ended (and inconclusive) discussion of this question in &lt;a href=&quot;https://mdickens.me/2020/01/06/how_much_leverage_should_altruists_use/#risk-aversion-for-altruistic-causes&quot;&gt;a previous essay, under “Risk aversion for altruistic causes”&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;But &lt;a href=&quot;https://mdickens.me/2020/01/06/how_much_leverage_should_altruists_use/#uncorrelated-small-donors-are-nearly-risk-neutral&quot;&gt;uncorrelated donors are nearly risk-neutral&lt;/a&gt;, which suggests such philanthropists (if they exist in practice) might prefer to take on even more leverage, decreasing geometric returns but increasing expected arithmetic returns. Such an investor would knowingly accept a high probability of getting worse performance than the logarithmic-utility investor, in the hope of getting lucky with an unlikely but extremely valuable (to them) market upswing. That sort of strategy would be crazy for most investors, but it might make sense for altruistic portfolios.&lt;/p&gt;

&lt;h3 id=&quot;some-sample-portfolios-1&quot;&gt;Some sample portfolios&lt;/h3&gt;

&lt;p&gt;Below, I present summary statistics for various portfolios with empirically optimal (“target”) leverage and with 50 percentage points under or over the target. All portfolios assume leverage costs RF + 1%. Observe that exceeding the leverage target for the value/momentum portfolio results in bankruptcy.&lt;/p&gt;

&lt;p&gt;In addition to the summary statistics presented previously, these tables include the &lt;a href=&quot;http://www.tangotools.com/ui/ui.htm&quot;&gt;Ulcer Index&lt;/a&gt;, a risk indicator invented by Peter G. Martin that measures the depth and severity of drawdowns.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Table 7:&lt;/strong&gt; US stocks, 1927-2018&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt;Lev&lt;/th&gt;
      &lt;th&gt;Sharpe&lt;/th&gt;
      &lt;th&gt;CAGR&lt;/th&gt;
      &lt;th&gt;Stdev&lt;/th&gt;
      &lt;th&gt;Ulcer&lt;/th&gt;
      &lt;th&gt;MaxDD&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Under&lt;/td&gt;
      &lt;td&gt;1.41&lt;/td&gt;
      &lt;td&gt;0.29&lt;/td&gt;
      &lt;td&gt;11.2&lt;/td&gt;
      &lt;td&gt;26.0&lt;/td&gt;
      &lt;td&gt;32.8&lt;/td&gt;
      &lt;td&gt;94%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Target&lt;/td&gt;
      &lt;td&gt;1.91&lt;/td&gt;
      &lt;td&gt;0.23&lt;/td&gt;
      &lt;td&gt;11.8&lt;/td&gt;
      &lt;td&gt;35.2&lt;/td&gt;
      &lt;td&gt;45.7&lt;/td&gt;
      &lt;td&gt;98%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Over&lt;/td&gt;
      &lt;td&gt;2.41&lt;/td&gt;
      &lt;td&gt;0.17&lt;/td&gt;
      &lt;td&gt;11.2&lt;/td&gt;
      &lt;td&gt;44.5&lt;/td&gt;
      &lt;td&gt;57.2&lt;/td&gt;
      &lt;td&gt;99.6%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Imagine an alternative world where we choose our desired leverage based on the sample period 1952-2018, and then the Great Depression begins in 2029 instead of 1929. Using the methodology above, we choose 2.75 as the point of empirically optimal leverage. When the Great Depression hits, the portfolio experiences a 99.9% drawdown but luckily does not go bankrupt; it would only go bust at 3.43:1 leverage or higher.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Table 8:&lt;/strong&gt; 60% US stocks, 40% US bonds, 1947-2018&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt;Lev&lt;/th&gt;
      &lt;th&gt;Sharpe&lt;/th&gt;
      &lt;th&gt;CAGR&lt;/th&gt;
      &lt;th&gt;Stdev&lt;/th&gt;
      &lt;th&gt;Ulcer&lt;/th&gt;
      &lt;th&gt;MaxDD&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Under&lt;/td&gt;
      &lt;td&gt;3.99&lt;/td&gt;
      &lt;td&gt;0.30&lt;/td&gt;
      &lt;td&gt;15.8&lt;/td&gt;
      &lt;td&gt;37.4&lt;/td&gt;
      &lt;td&gt;38.6&lt;/td&gt;
      &lt;td&gt;91%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Target&lt;/td&gt;
      &lt;td&gt;4.49&lt;/td&gt;
      &lt;td&gt;0.27&lt;/td&gt;
      &lt;td&gt;16.0&lt;/td&gt;
      &lt;td&gt;42.1&lt;/td&gt;
      &lt;td&gt;44.2&lt;/td&gt;
      &lt;td&gt;94%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Over&lt;/td&gt;
      &lt;td&gt;4.99&lt;/td&gt;
      &lt;td&gt;0.24&lt;/td&gt;
      &lt;td&gt;15.8&lt;/td&gt;
      &lt;td&gt;46.8&lt;/td&gt;
      &lt;td&gt;49.7&lt;/td&gt;
      &lt;td&gt;96%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;&lt;strong&gt;Table 9:&lt;/strong&gt; US val/mom, 1952-2018&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt;Lev&lt;/th&gt;
      &lt;th&gt;Sharpe&lt;/th&gt;
      &lt;th&gt;CAGR&lt;/th&gt;
      &lt;th&gt;Stdev&lt;/th&gt;
      &lt;th&gt;Ulcer&lt;/th&gt;
      &lt;th&gt;MaxDD&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Under&lt;/td&gt;
      &lt;td&gt;2.42&lt;/td&gt;
      &lt;td&gt;0.61&lt;/td&gt;
      &lt;td&gt;34.5&lt;/td&gt;
      &lt;td&gt;47.8&lt;/td&gt;
      &lt;td&gt;35.1&lt;/td&gt;
      &lt;td&gt;92%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Target&lt;/td&gt;
      &lt;td&gt;2.92&lt;/td&gt;
      &lt;td&gt;0.53&lt;/td&gt;
      &lt;td&gt;36.1&lt;/td&gt;
      &lt;td&gt;57.7&lt;/td&gt;
      &lt;td&gt;44.4&lt;/td&gt;
      &lt;td&gt;96%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Over&lt;/td&gt;
      &lt;td&gt;3.42&lt;/td&gt;
      &lt;td&gt;-1.48&lt;/td&gt;
      &lt;td&gt;-100.0&lt;/td&gt;
      &lt;td&gt;67.6&lt;/td&gt;
      &lt;td&gt;486.5&lt;/td&gt;
      &lt;td&gt;100%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;See &lt;a href=&quot;#appendix-b-more-over--or-under-leveraged-sample-portfolios&quot;&gt;Appendix B&lt;/a&gt; for additional tables.&lt;/p&gt;

&lt;h2 id=&quot;rebalance-timing-luck&quot;&gt;Rebalance timing luck&lt;/h2&gt;

&lt;p&gt;Optimal leverage is vulnerable to &lt;a href=&quot;https://blog.thinknewfound.com/2018/01/quantifying-timing-luck/&quot;&gt;rebalance timing luck&lt;/a&gt;&lt;sup id=&quot;fnref:6&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:6&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;9&lt;/a&gt;&lt;/sup&gt;. The performance of a leveraged portfolio might change dramatically if one starts investing a few days earlier or later.&lt;/p&gt;

&lt;p&gt;We would like to know how a historical portfolio would change if we put the rebalance date on different days of the month. To test this, I constructed 22 portfolios, each of which rebalances once every 22 trading days, one portfolio for each possible start day. (I use the number 22 because there are approximately 22 trading days per month.) I constructed these portfolios over US equities 1927-2019.&lt;/p&gt;
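&lt;p&gt;The construction can be sketched like this. This is an illustration on simulated daily returns, not the actual test; debt is held fixed between rebalances, financing costs are omitted, and the days before each portfolio’s first rebalance are skipped for simplicity:&lt;/p&gt;

```python
import numpy as np

def final_wealth(daily_returns, leverage, offset, period=22):
    """Final wealth of a portfolio that resets to `leverage` every
    `period` trading days, starting at day `offset`. A rebalance-window
    loss past -100% counts as going bust."""
    wealth = 1.0
    for start in range(offset, len(daily_returns), period):
        block = np.asarray(daily_returns[start:start + period])
        growth = np.prod(1 + block)
        wealth *= max(leverage * growth - (leverage - 1), 0.0)
        if wealth == 0:
            break
    return wealth

rng = np.random.default_rng(2)
daily = rng.normal(0.0004, 0.01, 252 * 30)  # hypothetical daily returns
finals = [final_wealth(daily, 2.0, offset) for offset in range(22)]
# The 22 portfolios differ only in rebalance timing, yet they end at
# different wealth levels.
```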

&lt;p&gt;The median portfolio had an empirically optimal leverage of 1.83, but the portfolios ranged from a minimum of 1.63 to a maximum of 1.97.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Table 10:&lt;/strong&gt; Min/median/max values of summary statistics for the 22 portfolios&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt;Lev&lt;/th&gt;
      &lt;th&gt;Sam&lt;/th&gt;
      &lt;th&gt;Sharpe&lt;/th&gt;
      &lt;th&gt;CAGR&lt;/th&gt;
      &lt;th&gt;Stdev&lt;/th&gt;
      &lt;th&gt;Ulcer&lt;/th&gt;
      &lt;th&gt;MaxDD&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Min&lt;/td&gt;
      &lt;td&gt;1.63&lt;/td&gt;
      &lt;td&gt;1.38&lt;/td&gt;
      &lt;td&gt;0.23&lt;/td&gt;
      &lt;td&gt;11.0&lt;/td&gt;
      &lt;td&gt;32.0&lt;/td&gt;
      &lt;td&gt;41.3&lt;/td&gt;
      &lt;td&gt;97%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Median&lt;/td&gt;
      &lt;td&gt;1.83&lt;/td&gt;
      &lt;td&gt;1.55&lt;/td&gt;
      &lt;td&gt;0.23&lt;/td&gt;
      &lt;td&gt;11.2&lt;/td&gt;
      &lt;td&gt;33.4&lt;/td&gt;
      &lt;td&gt;44.7&lt;/td&gt;
      &lt;td&gt;98%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Max&lt;/td&gt;
      &lt;td&gt;1.97&lt;/td&gt;
      &lt;td&gt;1.62&lt;/td&gt;
      &lt;td&gt;0.23&lt;/td&gt;
      &lt;td&gt;11.7&lt;/td&gt;
      &lt;td&gt;35.5&lt;/td&gt;
      &lt;td&gt;46.8&lt;/td&gt;
      &lt;td&gt;98%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;I also tested how the portfolios would have performed if we had gotten “maximally unlucky” and used the highest leverage of 1.97 across all portfolios. Some would have experienced &amp;gt;99% drawdowns, but none would have gone bust.&lt;/p&gt;

&lt;p&gt;This shows that even choosing a slightly different rebalance date can non-trivially change the outcome of a portfolio.&lt;/p&gt;

&lt;h2 id=&quot;path-dependence&quot;&gt;Path dependence&lt;/h2&gt;

&lt;p&gt;In theory, markets follow a random walk: each period’s return is independent of the last. In practice, bad months tend to cluster together, as they did in 1929, 2000, or 2008.&lt;/p&gt;

&lt;p&gt;For investors with logarithmic utility, this doesn’t matter. Investors with sub-logarithmic utility, however, prefer bad months to be spread evenly over time.&lt;/p&gt;

&lt;p&gt;We can see the effect of this by taking monthly returns of simulated portfolios and shuffling their order. The shuffled portfolio has all the same &lt;a href=&quot;https://en.wikipedia.org/wiki/Moment_(mathematics)&quot;&gt;moments&lt;/a&gt; as the original, but sub-logarithmic utility functions prefer most shuffled portfolios over the original.&lt;/p&gt;

&lt;p&gt;Unlike an investor with logarithmic utility, someone with sub-logarithmic utility is not indifferent to the sequencing of returns, and we cannot simply look at the total return of a portfolio. One way to express to what extent someone prefers a particular portfolio is to calculate the return of all rolling 5-year periods (or 10-year, or 2-year, or whatever) and find the average utility over all rolling returns, using the &lt;a href=&quot;https://en.wikipedia.org/wiki/Isoelastic_utility&quot;&gt;isoelastic utility function&lt;/a&gt;.&lt;/p&gt;
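&lt;p&gt;A sketch of that scoring procedure follows. It is illustrative only: with i.i.d. simulated returns, the original and shuffled series differ only by sampling noise, whereas the claim about real returns depends on bad months clustering together:&lt;/p&gt;

```python
import numpy as np

def isoelastic_utility(wealth, eta):
    """CRRA utility of wealth; eta = 1 is logarithmic,
    eta > 1 is sub-logarithmic (more risk-averse)."""
    if eta == 1:
        return np.log(wealth)
    return (wealth ** (1 - eta) - 1) / (1 - eta)

def avg_rolling_utility(monthly_returns, eta, window=60):
    """Average utility of terminal wealth across all rolling
    `window`-month periods (60 months = 5 years)."""
    rets = np.asarray(monthly_returns)
    utils = [isoelastic_utility(np.prod(1 + rets[i:i + window]), eta)
             for i in range(len(rets) - window + 1)]
    return float(np.mean(utils))

rng = np.random.default_rng(3)
original = rng.normal(0.005, 0.04, 600)
shuffled = rng.permutation(original)  # same moments, different sequencing
u_original = avg_rolling_utility(original, eta=2)
u_shuffled = avg_rolling_utility(shuffled, eta=2)
```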

&lt;p&gt;If we do this, we find that portfolios where the returns are randomly re-ordered usually (but not always) produce higher expected utility than the original portfolio. (Results not shown.) This means people with sub-logarithmic utility dislike actual market returns more strongly than the return, standard deviation, skewness, or any other statistical moment might suggest—which means they want to take less leverage than the Samuelson share.&lt;/p&gt;

&lt;p&gt;This subject deserves much more detail, but as this essay primarily focuses on logarithmic utility, I will leave it for another time.&lt;/p&gt;

&lt;h2 id=&quot;how-does-ex-ante-optimal-leverage-compare-to-ex-post&quot;&gt;How does ex-ante optimal leverage compare to ex-post?&lt;/h2&gt;

&lt;p&gt;To determine optimal leverage for each portfolio, we looked at the portfolio’s performance at the end of the period and maximized geometric return. But we can’t do that in real life because we don’t know what future returns will look like.&lt;/p&gt;

&lt;p&gt;What happens if we calculate optimal leverage &lt;em&gt;ex-ante&lt;/em&gt; instead of &lt;em&gt;ex-post&lt;/em&gt;?&lt;/p&gt;

&lt;p&gt;One naive way to decide how much leverage to use: At each point in time, calculate what the empirically optimal leverage would have been over all previous periods, and use that amount of leverage for the following period. We can do better than this in practice, but let’s start with this method and see what happens.&lt;/p&gt;

&lt;p&gt;This method requires limiting our time range—we have to spend some number of years gathering data so we can get an initial estimate of the expected return/standard deviation. For this backtest, I chose 20 years as the waiting period. Because this reduces the sample period by 20 years, it makes the backtest more sensitive to the specific conditions of the tested time range.&lt;/p&gt;
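&lt;p&gt;A hedged sketch of this walk-forward procedure, approximating the empirically optimal leverage over prior periods with the closed-form log-utility solution \((\mu - R)/\sigma^2\); costs of leverage and transaction costs are ignored here, and the 240-month warm-up matches the 20-year waiting period:&lt;/p&gt;

```python
import statistics

def walk_forward_leverage(monthly_returns, rf=0.0, warmup=240):
    """At each month, set leverage to the Samuelson share estimated from all
    prior months, (mu - rf) / sigma^2, then apply it to the next month's return.
    Assumes logarithmic utility; ignores costs of leverage and transactions."""
    wealth = 1.0
    for t in range(warmup, len(monthly_returns)):
        history = monthly_returns[:t]
        mu = statistics.mean(history)
        sigma2 = statistics.variance(history)
        lev = max(0.0, (mu - rf) / sigma2)  # clamp at zero: no shorting in this sketch
        wealth *= 1 + rf + lev * (monthly_returns[t] - rf)
    return wealth
```

A real implementation would also subtract the cost of leverage from the expected return before computing the share, as discussed earlier.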

&lt;h3 id=&quot;some-sample-portfolios-2&quot;&gt;Some sample portfolios&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Table 11:&lt;/strong&gt; Ex-ante and ex-post leverage for US stocks, 1947-2018&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt;Sharpe&lt;/th&gt;
      &lt;th&gt;CAGR&lt;/th&gt;
      &lt;th&gt;Stdev&lt;/th&gt;
      &lt;th&gt;Ulcer&lt;/th&gt;
      &lt;th&gt;MaxDD&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Un-leveraged&lt;/td&gt;
      &lt;td&gt;0.45&lt;/td&gt;
      &lt;td&gt;10.8&lt;/td&gt;
      &lt;td&gt;14.6&lt;/td&gt;
      &lt;td&gt;12.1&lt;/td&gt;
      &lt;td&gt;50%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Ex-ante&lt;/td&gt;
      &lt;td&gt;0.46&lt;/td&gt;
      &lt;td&gt;14.1&lt;/td&gt;
      &lt;td&gt;21.2&lt;/td&gt;
      &lt;td&gt;17.4&lt;/td&gt;
      &lt;td&gt;61%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Ex-post&lt;/td&gt;
      &lt;td&gt;0.27&lt;/td&gt;
      &lt;td&gt;15.1&lt;/td&gt;
      &lt;td&gt;39.6&lt;/td&gt;
      &lt;td&gt;42.5&lt;/td&gt;
      &lt;td&gt;93%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;&lt;strong&gt;Table 12:&lt;/strong&gt; Ex-ante and ex-post leverage for 60% US stocks, 40% bonds, 1967-2018&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt;Sharpe&lt;/th&gt;
      &lt;th&gt;CAGR&lt;/th&gt;
      &lt;th&gt;Stdev&lt;/th&gt;
      &lt;th&gt;Ulcer&lt;/th&gt;
      &lt;th&gt;MaxDD&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Un-leveraged&lt;/td&gt;
      &lt;td&gt;0.43&lt;/td&gt;
      &lt;td&gt;9.3&lt;/td&gt;
      &lt;td&gt;10.1&lt;/td&gt;
      &lt;td&gt;6.6&lt;/td&gt;
      &lt;td&gt;30%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Ex-ante&lt;/td&gt;
      &lt;td&gt;0.28&lt;/td&gt;
      &lt;td&gt;15.9&lt;/td&gt;
      &lt;td&gt;38.1&lt;/td&gt;
      &lt;td&gt;39.1&lt;/td&gt;
      &lt;td&gt;83%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Ex-post&lt;/td&gt;
      &lt;td&gt;0.23&lt;/td&gt;
      &lt;td&gt;13.2&lt;/td&gt;
      &lt;td&gt;35.6&lt;/td&gt;
      &lt;td&gt;37.2&lt;/td&gt;
      &lt;td&gt;86%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;&lt;strong&gt;Table 13:&lt;/strong&gt; Ex-ante and ex-post leverage for 60% US value/momentum, 40% bonds, 1972-2018&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt;Sharpe&lt;/th&gt;
      &lt;th&gt;CAGR&lt;/th&gt;
      &lt;th&gt;Stdev&lt;/th&gt;
      &lt;th&gt;Ulcer&lt;/th&gt;
      &lt;th&gt;MaxDD&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Un-leveraged&lt;/td&gt;
      &lt;td&gt;0.74&lt;/td&gt;
      &lt;td&gt;14.4&lt;/td&gt;
      &lt;td&gt;12.6&lt;/td&gt;
      &lt;td&gt;6.5&lt;/td&gt;
      &lt;td&gt;33%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Ex-ante&lt;/td&gt;
      &lt;td&gt;0.61&lt;/td&gt;
      &lt;td&gt;37.4&lt;/td&gt;
      &lt;td&gt;51.7&lt;/td&gt;
      &lt;td&gt;26.8&lt;/td&gt;
      &lt;td&gt;81%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Ex-post&lt;/td&gt;
      &lt;td&gt;0.44&lt;/td&gt;
      &lt;td&gt;31.1&lt;/td&gt;
      &lt;td&gt;56.8&lt;/td&gt;
      &lt;td&gt;37.2&lt;/td&gt;
      &lt;td&gt;92%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;See &lt;a href=&quot;#appendix-c-more-ex-ante-optimal-leverage-portfolios&quot;&gt;Appendix C&lt;/a&gt; for additional sample portfolios.&lt;/p&gt;

&lt;h3 id=&quot;commentary-1&quot;&gt;Commentary&lt;/h3&gt;

&lt;p&gt;None of these portfolio constructions show a large difference between the CAGRs of the ex-ante and ex-post portfolios. In fact, the ex-ante portfolios have a higher CAGR in two out of three samples, and a higher Sharpe ratio in all three. I was actually surprised by this result—I don’t know how to explain this and I’m inclined to attribute it to coincidence. Importantly, this suggests that choosing Samuelson share ex-ante by naively extrapolating past returns might actually not work too badly.&lt;/p&gt;

&lt;p&gt;We can probably do better than simply extrapolating past return and standard deviation. Colby Davis’ &lt;a href=&quot;https://rhsfinancial.com/2017/06/20/line-aggressive-crazy-leverage/&quot;&gt;The Line Between Aggressive and Crazy&lt;/a&gt; proposes a methodology for estimating forward-looking return and standard deviation:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Estimate equity returns using earnings yield.&lt;/li&gt;
  &lt;li&gt;Estimate bond returns using the yield to maturity.&lt;/li&gt;
  &lt;li&gt;Estimate volatility as the trailing 60-day standard deviation.&lt;/li&gt;
&lt;/ul&gt;
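&lt;p&gt;A sketch of how these three estimates might combine into a forward-looking Samuelson share \((\mu - R)/(\eta \sigma^2)\); the function and parameter names are illustrative, not Davis’ actual implementation:&lt;/p&gt;

```python
import math
import statistics

def forward_samuelson_share(earnings_yield, bond_ytm, stock_weight,
                            trailing_daily_returns, rf=0.02, eta=1.0):
    """Forward-looking leverage estimate in the spirit of Davis' methodology:
    expected return from current yields, volatility from the trailing 60-day
    standard deviation. All parameter values here are illustrative."""
    mu = stock_weight * earnings_yield + (1 - stock_weight) * bond_ytm
    daily_sd = statistics.pstdev(trailing_daily_returns[-60:])
    sigma = daily_sd * math.sqrt(252)  # annualize assuming 252 trading days
    return (mu - rf) / (eta * sigma ** 2)
```

Because the trailing volatility estimate updates every day, the recommended leverage changes frequently—which is the source of the extra rebalancing discussed below.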

&lt;p&gt;His backtest covers a shorter time period (1993-2017), but he does find that over the sample, this methodology substantially outperformed the strategy that uses constant leverage equal to the ex-post optimal Samuelson share&lt;sup id=&quot;fnref:7&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:7&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;10&lt;/a&gt;&lt;/sup&gt;. Davis writes: “This is because by estimating volatility in real-time, the forward-looking […] investor is able to mitigate some of the worst drawdowns that the constant-leverage investor dealt with.”&lt;/p&gt;

&lt;p&gt;This methodology does require more rebalancing because it changes the leverage target every time volatility changes, which will incur greater transaction costs. This does not matter much for a portfolio that invests in highly liquid index funds, but it may pose issues for the concentrated value/momentum and TSMOM portfolios (or other concentrated portfolios).&lt;/p&gt;

&lt;h1 id=&quot;conclusions&quot;&gt;Conclusions&lt;/h1&gt;

&lt;p&gt;According to the standard theoretical model of asset returns, the Samuelson share tells us how much leverage to use. We began by considering some caveats that might make portfolio behavior in practice look worse than the theoretical model suggests:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Mean reversion&lt;/li&gt;
  &lt;li&gt;Left skew of investment returns&lt;/li&gt;
  &lt;li&gt;Unpredictability of future returns&lt;/li&gt;
  &lt;li&gt;Cost of leverage&lt;/li&gt;
  &lt;li&gt;Transaction costs&lt;/li&gt;
  &lt;li&gt;Taxes&lt;/li&gt;
  &lt;li&gt;Portfolios can’t be adjusted continuously&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We looked at some simulated historical portfolios and saw that the Samuelson share usually did not differ much from empirically optimal leverage, with or without accounting for the cost of leverage and transaction costs. This suggests that, for the tested portfolios, mean reversion, left skew, and non-continuous rebalancing did not much affect optimal leverage. However, we saw that using too much leverage can potentially result in bankruptcy, which suggests we should err on the side of caution and prefer to under-estimate optimal leverage rather than over-estimate it.&lt;/p&gt;

&lt;p&gt;Perhaps the most important caveat to the theoretical model is the unpredictability of future return. We attempted to get a sense of how this matters by looking at what would have happened in historical portfolios if we had used too little or too much leverage. We saw that the “under” and “over” portfolios usually had about the same geometric return, but that the “over” portfolio sometimes lost all of its money. This result again suggests that an investor who wishes to maximize geometric return should still prefer too little leverage to too much.&lt;/p&gt;

&lt;p&gt;The portfolio simulations explicitly accounted for costs of leverage and transaction costs. According to these simulations, these costs do reduce optimal leverage, but this can be accounted for by subtracting costs from the expected return when calculating the Samuelson share. Of course, this requires that we accurately estimate costs, which we cannot do perfectly. But we can estimate costs more reliably than future portfolio return and volatility.&lt;/p&gt;

&lt;p&gt;The final remaining caveat, taxes, was not considered. This caveat does not apply to all investors. Foundations do not pay taxes on capital gains, and individuals can invest tax-free using a donor-advised fund. Investors in the United States can avoid capital gains taxes on their non-charitable funds with a 401(k) or an IRA.&lt;/p&gt;

&lt;p&gt;For investors who must pay capital gains tax but can also deduct capital losses, taxes reduce both expected return and standard deviation by the same proportion. Thus, investors can calculate the tax-adjusted Samuelson share by reducing return \(\mu\) and standard deviation \(\sigma\) according to the tax rate, while keeping the cost of leverage \(R\) unchanged. This will decrease the Samuelson share somewhat, and reduce the return of the leveraged portfolio by somewhat more. Taxes do not change the difference between the Samuelson share and optimal leverage because they scale the returns of the portfolio without changing the shape of the distribution.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://www.lesswrong.com/posts/h24JGbmweNpWZfBkM/markets-are-anti-inductive&quot;&gt;Markets are anti-inductive&lt;/a&gt;. Just because they behaved a certain way in the past does not mean they will continue to behave that way. So all this analysis on historical data might not apply to markets in the future. That said, the high-level insights drawn did not depend on any assumptions that contradict the efficient market hypothesis, so I have no particular reason not to expect the insights to continue to apply. Rather, I would simply be cautious about making overly strong assumptions about future market behavior. We can draw some interesting conclusions, but hold them weakly.&lt;/p&gt;

&lt;p&gt;All things considered, halving the Samuelson share—the plan I discussed previously—appears too conservative. But simply using the Samuelson share unaltered is probably too aggressive. According to the parameters discussed in this essay (e.g., logarithmic utility), forward-looking optimal leverage appears to lie somewhere between 0.5x and 1x the Samuelson share.&lt;/p&gt;

&lt;p&gt;Thanks to Kit Harris for providing feedback on this essay.&lt;/p&gt;

&lt;h1 id=&quot;appendix&quot;&gt;Appendix&lt;/h1&gt;

&lt;h2 id=&quot;appendix-a-more-sample-portfolios&quot;&gt;Appendix A: More sample portfolios&lt;/h2&gt;

&lt;p&gt;Just within the constraints of my backtest, we can vary portfolios based on which asset classes we include, the proportions of asset classes within a portfolio, fees, and costs of leverage. In the main essay, I only presented a few of the most significant variations. This section provides some hypothetical results for a wider variety of portfolios.&lt;/p&gt;

&lt;p&gt;In addition to the summary statistics presented previously, these tables include the &lt;a href=&quot;http://www.tangotools.com/ui/ui.htm&quot;&gt;Ulcer Index&lt;/a&gt;, a risk indicator invented by Peter G. Martin that measures the depth and severity of drawdowns.&lt;/p&gt;
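&lt;p&gt;For reference, the Ulcer Index is the root-mean-square of percentage drawdowns from the running peak. A minimal implementation:&lt;/p&gt;

```python
import math

def ulcer_index(prices):
    """Ulcer Index (Peter G. Martin): root-mean-square of percentage
    drawdowns from the running peak, over the whole price series."""
    peak = prices[0]
    sq_drawdowns = []
    for p in prices:
        peak = max(peak, p)
        drawdown_pct = 100 * (p - peak) / peak  # non-positive by construction
        sq_drawdowns.append(drawdown_pct ** 2)
    return math.sqrt(sum(sq_drawdowns) / len(sq_drawdowns))

print(ulcer_index([100, 90, 80, 100]))  # ≈ 11.18: a deep dip, then full recovery
```

Unlike maximum drawdown, it penalizes both the depth and the duration of drawdowns, so a long shallow slump and a brief crash can score similarly.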

&lt;p&gt;All of these backtests assume no fees.&lt;/p&gt;

&lt;p&gt;(I realize these tables are going past the margins on the page, but there’s not much I can do about it without substantially redesigning my website’s CSS. Apologies to mobile viewers.)&lt;/p&gt;

&lt;p&gt;1952-2018, including US stocks, US value/momentum, bonds, and commodities. Cost of leverage equals RF.&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt;Lev&lt;/th&gt;
      &lt;th&gt;Sam&lt;/th&gt;
      &lt;th&gt;Sharpe&lt;/th&gt;
      &lt;th&gt;CAGR&lt;/th&gt;
      &lt;th&gt;Stdev&lt;/th&gt;
      &lt;th&gt;Ulcer&lt;/th&gt;
      &lt;th&gt;MaxDD&lt;/th&gt;
      &lt;th&gt;Lev-Sam&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;US stocks&lt;/td&gt;
      &lt;td&gt;2.98&lt;/td&gt;
      &lt;td&gt;3.03&lt;/td&gt;
      &lt;td&gt;0.28&lt;/td&gt;
      &lt;td&gt;17.2&lt;/td&gt;
      &lt;td&gt;43.7&lt;/td&gt;
      &lt;td&gt;46.0&lt;/td&gt;
      &lt;td&gt;95%&lt;/td&gt;
      &lt;td&gt;-0.05&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;bonds&lt;/td&gt;
      &lt;td&gt;3.21&lt;/td&gt;
      &lt;td&gt;2.64&lt;/td&gt;
      &lt;td&gt;0.12&lt;/td&gt;
      &lt;td&gt;7.1&lt;/td&gt;
      &lt;td&gt;23.6&lt;/td&gt;
      &lt;td&gt;20.8&lt;/td&gt;
      &lt;td&gt;75%&lt;/td&gt;
      &lt;td&gt;0.57&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;commodities&lt;/td&gt;
      &lt;td&gt;2.43&lt;/td&gt;
      &lt;td&gt;1.97&lt;/td&gt;
      &lt;td&gt;0.17&lt;/td&gt;
      &lt;td&gt;10.0&lt;/td&gt;
      &lt;td&gt;32.5&lt;/td&gt;
      &lt;td&gt;40.2&lt;/td&gt;
      &lt;td&gt;90%&lt;/td&gt;
      &lt;td&gt;0.46&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;US val/mom&lt;/td&gt;
      &lt;td&gt;3.00&lt;/td&gt;
      &lt;td&gt;3.94&lt;/td&gt;
      &lt;td&gt;0.56&lt;/td&gt;
      &lt;td&gt;38.7&lt;/td&gt;
      &lt;td&gt;59.3&lt;/td&gt;
      &lt;td&gt;44.8&lt;/td&gt;
      &lt;td&gt;96%&lt;/td&gt;
      &lt;td&gt;-0.94&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;60% US stock, 40% bond&lt;/td&gt;
      &lt;td&gt;5.17&lt;/td&gt;
      &lt;td&gt;5.36&lt;/td&gt;
      &lt;td&gt;0.30&lt;/td&gt;
      &lt;td&gt;19.7&lt;/td&gt;
      &lt;td&gt;49.0&lt;/td&gt;
      &lt;td&gt;47.9&lt;/td&gt;
      &lt;td&gt;96%&lt;/td&gt;
      &lt;td&gt;-0.19&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;60% US val/mom, 40% bond&lt;/td&gt;
      &lt;td&gt;5.32&lt;/td&gt;
      &lt;td&gt;6.99&lt;/td&gt;
      &lt;td&gt;0.58&lt;/td&gt;
      &lt;td&gt;43.4&lt;/td&gt;
      &lt;td&gt;64.4&lt;/td&gt;
      &lt;td&gt;45.5&lt;/td&gt;
      &lt;td&gt;98%&lt;/td&gt;
      &lt;td&gt;-1.67&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;40% US stock, 40% bond, 20% commodities&lt;/td&gt;
      &lt;td&gt;6.85&lt;/td&gt;
      &lt;td&gt;7.94&lt;/td&gt;
      &lt;td&gt;0.36&lt;/td&gt;
      &lt;td&gt;23.4&lt;/td&gt;
      &lt;td&gt;50.7&lt;/td&gt;
      &lt;td&gt;36.2&lt;/td&gt;
      &lt;td&gt;96%&lt;/td&gt;
      &lt;td&gt;-1.09&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;40% US val/mom, 40% bond, 20% commodities&lt;/td&gt;
      &lt;td&gt;6.71&lt;/td&gt;
      &lt;td&gt;9.70&lt;/td&gt;
      &lt;td&gt;0.63&lt;/td&gt;
      &lt;td&gt;44.2&lt;/td&gt;
      &lt;td&gt;60.8&lt;/td&gt;
      &lt;td&gt;38.9&lt;/td&gt;
      &lt;td&gt;99%&lt;/td&gt;
      &lt;td&gt;-2.99&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;1985-2018, including US stocks, US value/momentum, and TSMOM. Cost of leverage equals RF.&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt;Lev&lt;/th&gt;
      &lt;th&gt;Sam&lt;/th&gt;
      &lt;th&gt;Sharpe&lt;/th&gt;
      &lt;th&gt;CAGR&lt;/th&gt;
      &lt;th&gt;Stdev&lt;/th&gt;
      &lt;th&gt;Ulcer&lt;/th&gt;
      &lt;th&gt;MaxDD&lt;/th&gt;
      &lt;th&gt;Lev-Sam&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;US stocks&lt;/td&gt;
      &lt;td&gt;2.99&lt;/td&gt;
      &lt;td&gt;3.37&lt;/td&gt;
      &lt;td&gt;0.33&lt;/td&gt;
      &lt;td&gt;19.0&lt;/td&gt;
      &lt;td&gt;45.0&lt;/td&gt;
      &lt;td&gt;48.2&lt;/td&gt;
      &lt;td&gt;95%&lt;/td&gt;
      &lt;td&gt;-0.38&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;US val/mom&lt;/td&gt;
      &lt;td&gt;2.72&lt;/td&gt;
      &lt;td&gt;3.50&lt;/td&gt;
      &lt;td&gt;0.49&lt;/td&gt;
      &lt;td&gt;30.3&lt;/td&gt;
      &lt;td&gt;53.6&lt;/td&gt;
      &lt;td&gt;39.9&lt;/td&gt;
      &lt;td&gt;94%&lt;/td&gt;
      &lt;td&gt;-0.78&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;TSMOM&lt;/td&gt;
      &lt;td&gt;8.10&lt;/td&gt;
      &lt;td&gt;9.58&lt;/td&gt;
      &lt;td&gt;0.90&lt;/td&gt;
      &lt;td&gt;95.4&lt;/td&gt;
      &lt;td&gt;98.9&lt;/td&gt;
      &lt;td&gt;49.8&lt;/td&gt;
      &lt;td&gt;91%&lt;/td&gt;
      &lt;td&gt;-1.48&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;50% US val/mom, 50% TSMOM&lt;/td&gt;
      &lt;td&gt;4.69&lt;/td&gt;
      &lt;td&gt;10.90&lt;/td&gt;
      &lt;td&gt;1.13&lt;/td&gt;
      &lt;td&gt;67.1&lt;/td&gt;
      &lt;td&gt;54.5&lt;/td&gt;
      &lt;td&gt;35.7&lt;/td&gt;
      &lt;td&gt;93%&lt;/td&gt;
      &lt;td&gt;-6.21&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;1990-2018, including US stocks and international stocks. Cost of leverage equals RF + 1%.&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt;Lev&lt;/th&gt;
      &lt;th&gt;Sam&lt;/th&gt;
      &lt;th&gt;Sharpe&lt;/th&gt;
      &lt;th&gt;CAGR&lt;/th&gt;
      &lt;th&gt;Stdev&lt;/th&gt;
      &lt;th&gt;Ulcer&lt;/th&gt;
      &lt;th&gt;MaxDD&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;US stocks&lt;/td&gt;
      &lt;td&gt;2.94&lt;/td&gt;
      &lt;td&gt;2.94&lt;/td&gt;
      &lt;td&gt;0.29&lt;/td&gt;
      &lt;td&gt;15.6&lt;/td&gt;
      &lt;td&gt;42.7&lt;/td&gt;
      &lt;td&gt;50.8&lt;/td&gt;
      &lt;td&gt;95%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;int’l stocks&lt;/td&gt;
      &lt;td&gt;1.15&lt;/td&gt;
      &lt;td&gt;0.70&lt;/td&gt;
      &lt;td&gt;0.15&lt;/td&gt;
      &lt;td&gt;5.6&lt;/td&gt;
      &lt;td&gt;18.9&lt;/td&gt;
      &lt;td&gt;21.3&lt;/td&gt;
      &lt;td&gt;62%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;50% US stocks, 50% int’l stocks&lt;/td&gt;
      &lt;td&gt;2.24&lt;/td&gt;
      &lt;td&gt;2.01&lt;/td&gt;
      &lt;td&gt;0.22&lt;/td&gt;
      &lt;td&gt;10.0&lt;/td&gt;
      &lt;td&gt;32.3&lt;/td&gt;
      &lt;td&gt;36.4&lt;/td&gt;
      &lt;td&gt;85%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;1927-2018, including US stocks and commodities. Cost of leverage equals RF + 1%.&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt;Lev&lt;/th&gt;
      &lt;th&gt;Sam&lt;/th&gt;
      &lt;th&gt;Sharpe&lt;/th&gt;
      &lt;th&gt;CAGR&lt;/th&gt;
      &lt;th&gt;Stdev&lt;/th&gt;
      &lt;th&gt;Ulcer&lt;/th&gt;
      &lt;th&gt;MaxDD&lt;/th&gt;
      &lt;th&gt;Lev-Sam&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;US stocks&lt;/td&gt;
      &lt;td&gt;1.91&lt;/td&gt;
      &lt;td&gt;1.60&lt;/td&gt;
      &lt;td&gt;0.23&lt;/td&gt;
      &lt;td&gt;11.8&lt;/td&gt;
      &lt;td&gt;35.2&lt;/td&gt;
      &lt;td&gt;45.7&lt;/td&gt;
      &lt;td&gt;98%&lt;/td&gt;
      &lt;td&gt;0.31&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;commodities&lt;/td&gt;
      &lt;td&gt;1.53&lt;/td&gt;
      &lt;td&gt;1.03&lt;/td&gt;
      &lt;td&gt;0.16&lt;/td&gt;
      &lt;td&gt;7.5&lt;/td&gt;
      &lt;td&gt;24.5&lt;/td&gt;
      &lt;td&gt;37.9&lt;/td&gt;
      &lt;td&gt;93%&lt;/td&gt;
      &lt;td&gt;0.5&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;66% US stock, 33% commodities&lt;/td&gt;
      &lt;td&gt;2.52&lt;/td&gt;
      &lt;td&gt;2.30&lt;/td&gt;
      &lt;td&gt;0.25&lt;/td&gt;
      &lt;td&gt;12.8&lt;/td&gt;
      &lt;td&gt;37.1&lt;/td&gt;
      &lt;td&gt;45.6&lt;/td&gt;
      &lt;td&gt;99%&lt;/td&gt;
      &lt;td&gt;0.22&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;1927-2018, including US stocks and commodities. Cost of leverage equals RF.&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt;Lev&lt;/th&gt;
      &lt;th&gt;Sam&lt;/th&gt;
      &lt;th&gt;Sharpe&lt;/th&gt;
      &lt;th&gt;CAGR&lt;/th&gt;
      &lt;th&gt;Stdev&lt;/th&gt;
      &lt;th&gt;Ulcer&lt;/th&gt;
      &lt;th&gt;MaxDD&lt;/th&gt;
      &lt;th&gt;Lev-Sam&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;US stocks&lt;/td&gt;
      &lt;td&gt;2.14&lt;/td&gt;
      &lt;td&gt;1.89&lt;/td&gt;
      &lt;td&gt;0.24&lt;/td&gt;
      &lt;td&gt;13.0&lt;/td&gt;
      &lt;td&gt;39.5&lt;/td&gt;
      &lt;td&gt;49.3&lt;/td&gt;
      &lt;td&gt;99%&lt;/td&gt;
      &lt;td&gt;0.25&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;commodities&lt;/td&gt;
      &lt;td&gt;1.93&lt;/td&gt;
      &lt;td&gt;1.42&lt;/td&gt;
      &lt;td&gt;0.15&lt;/td&gt;
      &lt;td&gt;8.3&lt;/td&gt;
      &lt;td&gt;30.9&lt;/td&gt;
      &lt;td&gt;45.0&lt;/td&gt;
      &lt;td&gt;97%&lt;/td&gt;
      &lt;td&gt;0.51&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;66% US stock, 33% commodities&lt;/td&gt;
      &lt;td&gt;2.86&lt;/td&gt;
      &lt;td&gt;2.76&lt;/td&gt;
      &lt;td&gt;0.26&lt;/td&gt;
      &lt;td&gt;14.7&lt;/td&gt;
      &lt;td&gt;42.2&lt;/td&gt;
      &lt;td&gt;48.5&lt;/td&gt;
      &lt;td&gt;100%&lt;/td&gt;
      &lt;td&gt;0.1&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;h2 id=&quot;appendix-b-more-over--or-under-leveraged-sample-portfolios&quot;&gt;Appendix B: More over- or under-leveraged sample portfolios&lt;/h2&gt;

&lt;p&gt;Commodities, 1927-2018.&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt;Lev&lt;/th&gt;
      &lt;th&gt;Sharpe&lt;/th&gt;
      &lt;th&gt;CAGR&lt;/th&gt;
      &lt;th&gt;Stdev&lt;/th&gt;
      &lt;th&gt;Ulcer&lt;/th&gt;
      &lt;th&gt;MaxDD&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Under&lt;/td&gt;
      &lt;td&gt;1.03&lt;/td&gt;
      &lt;td&gt;0.22&lt;/td&gt;
      &lt;td&gt;7.1&lt;/td&gt;
      &lt;td&gt;16.5&lt;/td&gt;
      &lt;td&gt;25.5&lt;/td&gt;
      &lt;td&gt;80%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Target&lt;/td&gt;
      &lt;td&gt;1.53&lt;/td&gt;
      &lt;td&gt;0.16&lt;/td&gt;
      &lt;td&gt;7.5&lt;/td&gt;
      &lt;td&gt;24.5&lt;/td&gt;
      &lt;td&gt;37.9&lt;/td&gt;
      &lt;td&gt;93%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Over&lt;/td&gt;
      &lt;td&gt;2.03&lt;/td&gt;
      &lt;td&gt;0.11&lt;/td&gt;
      &lt;td&gt;7.1&lt;/td&gt;
      &lt;td&gt;32.5&lt;/td&gt;
      &lt;td&gt;48.2&lt;/td&gt;
      &lt;td&gt;98%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;50% US value/momentum, 50% TSMOM, 1985-2018.&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt;Lev&lt;/th&gt;
      &lt;th&gt;Sharpe&lt;/th&gt;
      &lt;th&gt;CAGR&lt;/th&gt;
      &lt;th&gt;Stdev&lt;/th&gt;
      &lt;th&gt;Ulcer&lt;/th&gt;
      &lt;th&gt;MaxDD&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Under&lt;/td&gt;
      &lt;td&gt;3.83&lt;/td&gt;
      &lt;td&gt;0.67&lt;/td&gt;
      &lt;td&gt;34.0&lt;/td&gt;
      &lt;td&gt;44.5&lt;/td&gt;
      &lt;td&gt;32.3&lt;/td&gt;
      &lt;td&gt;78%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Target&lt;/td&gt;
      &lt;td&gt;4.33&lt;/td&gt;
      &lt;td&gt;0.61&lt;/td&gt;
      &lt;td&gt;35.0&lt;/td&gt;
      &lt;td&gt;50.3&lt;/td&gt;
      &lt;td&gt;39.5&lt;/td&gt;
      &lt;td&gt;88%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Over&lt;/td&gt;
      &lt;td&gt;4.83&lt;/td&gt;
      &lt;td&gt;0.47&lt;/td&gt;
      &lt;td&gt;31.4&lt;/td&gt;
      &lt;td&gt;56.1&lt;/td&gt;
      &lt;td&gt;50.2&lt;/td&gt;
      &lt;td&gt;98%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;h2 id=&quot;appendix-c-more-ex-ante-optimal-leverage-portfolios&quot;&gt;Appendix C: More ex-ante optimal leverage portfolios&lt;/h2&gt;

&lt;p&gt;All portfolios wait for 20 years before investing, and then estimate optimal leverage at each period by calculating the empirical optimal leverage over all prior periods.&lt;/p&gt;

&lt;p&gt;Commodities, 1947-2018.&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt;Sharpe&lt;/th&gt;
      &lt;th&gt;CAGR&lt;/th&gt;
      &lt;th&gt;Stdev&lt;/th&gt;
      &lt;th&gt;Ulcer&lt;/th&gt;
      &lt;th&gt;MaxDD&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Un-leveraged&lt;/td&gt;
      &lt;td&gt;0.31&lt;/td&gt;
      &lt;td&gt;8.5&lt;/td&gt;
      &lt;td&gt;14.0&lt;/td&gt;
      &lt;td&gt;15.5&lt;/td&gt;
      &lt;td&gt;51%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Ex-ante&lt;/td&gt;
      &lt;td&gt;0.23&lt;/td&gt;
      &lt;td&gt;8.6&lt;/td&gt;
      &lt;td&gt;19.4&lt;/td&gt;
      &lt;td&gt;19.9&lt;/td&gt;
      &lt;td&gt;55%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Ex-post&lt;/td&gt;
      &lt;td&gt;0.19&lt;/td&gt;
      &lt;td&gt;10.0&lt;/td&gt;
      &lt;td&gt;30.5&lt;/td&gt;
      &lt;td&gt;37.5&lt;/td&gt;
      &lt;td&gt;87%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;US value/momentum, 1972-2018.&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt; &lt;/th&gt;
      &lt;th&gt;Sharpe&lt;/th&gt;
      &lt;th&gt;CAGR&lt;/th&gt;
      &lt;th&gt;Stdev&lt;/th&gt;
      &lt;th&gt;Ulcer&lt;/th&gt;
      &lt;th&gt;MaxDD&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Un-leveraged&lt;/td&gt;
      &lt;td&gt;0.68&lt;/td&gt;
      &lt;td&gt;17.8&lt;/td&gt;
      &lt;td&gt;19.5&lt;/td&gt;
      &lt;td&gt;11.9&lt;/td&gt;
      &lt;td&gt;57%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Ex-ante&lt;/td&gt;
      &lt;td&gt;0.75&lt;/td&gt;
      &lt;td&gt;28.4&lt;/td&gt;
      &lt;td&gt;31.1&lt;/td&gt;
      &lt;td&gt;15.1&lt;/td&gt;
      &lt;td&gt;55%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Ex-post&lt;/td&gt;
      &lt;td&gt;0.46&lt;/td&gt;
      &lt;td&gt;28.6&lt;/td&gt;
      &lt;td&gt;51.8&lt;/td&gt;
      &lt;td&gt;38.9&lt;/td&gt;
      &lt;td&gt;94%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:8&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Credit to AQR Capital Management, LLC. &lt;a href=&quot;#fnref:8&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:4&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Swinkels (2019). &lt;a href=&quot;https://datarepository.eur.nl/articles/Data_Treasury_Bond_Return_Data_Starting_in_1962/8152748&quot;&gt;Historical Data: International monthly government bond returns.&lt;/a&gt; &lt;a href=&quot;#fnref:4&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:5&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Technically, “managed futures” just refers to a strategy that involves actively managing investments in futures, which could mean almost anything. In practice, “managed futures” typically refers to time-series momentum strategies. &lt;a href=&quot;#fnref:5&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:5:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:2&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Frazzini, Israel, and Moskowitz (2013). &lt;a href=&quot;https://www.aqr.com/Insights/Research/Working-Paper/Trading-Costs-of-Asset-Pricing-Anomalies&quot;&gt;Trading Costs of Asset Pricing Anomalies.&lt;/a&gt; &lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:3&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I used to manually buy individual stocks according to a quantitative investment strategy similar to the value/momentum strategy tested in this essay. Now I buy ETFs because they are more tax-efficient. An investor who buys individual stocks has to pay capital gains tax every time they rebalance. With an ETF, they (usually) only have to pay capital gains tax when selling the ETF itself. &lt;a href=&quot;#fnref:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Hurst, Ooi, and Pedersen (2013). &lt;a href=&quot;https://www.aqr.com/Insights/Research/Journal-Article/Demystifying-Managed-Futures&quot;&gt;Demystifying Managed Futures.&lt;/a&gt; &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:9&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Maybe someone with better knowledge of statistics could figure out how to derive a statistically significant result by breaking up the sample into multiple time periods. I know how to do the basic calculations, but I don’t know what’s considered a “legal move” when significance-testing related data samples. &lt;a href=&quot;#fnref:9&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:10&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Technically, a logarithmic utility function would refuse to accept any probability of going bust, no matter how small. In theory, we can use a model with known mean and variance and thus ensure this never happens. In practice, every asset has some small chance of going to zero. This runs into &lt;a href=&quot;https://nickbostrom.com/ethics/infinite.pdf&quot;&gt;unsolved problems on how to handle infinity&lt;/a&gt;. Solving these problems goes far beyond the scope of this essay. &lt;a href=&quot;#fnref:10&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:6&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;The linked blog post was later adapted as an academic paper: Hoffstein, Sibears, and Faber (2019). &lt;a href=&quot;https://jii.pm-research.com/content/early/2019/04/25/jii.2019.1.070&quot;&gt;Rebalance Timing Luck: The Difference between Hired and Fired.&lt;/a&gt; &lt;a href=&quot;#fnref:6&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:7&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Davis refers to what I call the Samuelson share as the Kelly Capital Growth Investment Criterion, which is really a special case of the Samuelson share where utility is logarithmic. &lt;a href=&quot;#fnref:7&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>How Much Leverage Should Altruists Use?</title>
				<pubDate>Mon, 06 Jan 2020 00:00:00 -0800</pubDate>
				<link>http://mdickens.me/2020/01/06/how_much_leverage_should_altruists_use/</link>
				<guid isPermaLink="true">http://mdickens.me/2020/01/06/how_much_leverage_should_altruists_use/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;&lt;em&gt;Cross-posted to the &lt;a href=&quot;https://forum.effectivealtruism.org/posts/g4oGNGwAoDwyMAJSB/how-much-leverage-should-altruists-use&quot;&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Last updated 2020-05-17.&lt;/em&gt;&lt;/p&gt;

&lt;h2 id=&quot;summary&quot;&gt;Summary&lt;/h2&gt;

&lt;p&gt;Philanthropic investors probably have greater risk tolerance than self-interested ones. Altruists can use leverage—borrowing money to invest—to increase the expected utility of their portfolios. They may wish to lever their portfolios at much higher ratios than self-interested investors—likely 2:1 to 3:1, and perhaps much higher (practical concerns notwithstanding).&lt;/p&gt;

&lt;p&gt;Unlike normal investors, altruists care about reducing their correlations with other investors, so they should heavily tilt their portfolios toward uncorrelated assets.&lt;/p&gt;

&lt;p&gt;This essay will discuss:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Traditional vs. altruistic investing&lt;/li&gt;
  &lt;li&gt;Basic arguments for using leverage&lt;/li&gt;
  &lt;li&gt;Appropriate levels of risk for altruists&lt;/li&gt;
  &lt;li&gt;The importance of uncorrelated assets, and where investors might be able to find them&lt;/li&gt;
  &lt;li&gt;Potential changes for philanthropic behavior&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;em&gt;Disclaimer: This should not be taken as investment advice. This content is for informational purposes only. Any given portfolio results are hypothetical and do not represent returns achieved by an actual investor.&lt;/em&gt;&lt;/p&gt;

&lt;!-- more --&gt;

&lt;h2 id=&quot;contents&quot;&gt;Contents&lt;/h2&gt;

&lt;ul id=&quot;markdown-toc&quot;&gt;
  &lt;li&gt;&lt;a href=&quot;#summary&quot; id=&quot;markdown-toc-summary&quot;&gt;Summary&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#contents&quot; id=&quot;markdown-toc-contents&quot;&gt;Contents&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#glossary&quot; id=&quot;markdown-toc-glossary&quot;&gt;Glossary&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#introduction&quot; id=&quot;markdown-toc-introduction&quot;&gt;Introduction&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#what-makes-altruistic-investing-different-from-traditional-investing&quot; id=&quot;markdown-toc-what-makes-altruistic-investing-different-from-traditional-investing&quot;&gt;What makes altruistic investing different from traditional investing?&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#improving-on-conventional-investing-wisdom&quot; id=&quot;markdown-toc-improving-on-conventional-investing-wisdom&quot;&gt;Improving on conventional investing wisdom&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#risk-and-leverage-for-traditional-investors&quot; id=&quot;markdown-toc-risk-and-leverage-for-traditional-investors&quot;&gt;Risk and leverage for traditional investors&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#risk-aversion-for-altruistic-causes&quot; id=&quot;markdown-toc-risk-aversion-for-altruistic-causes&quot;&gt;Risk aversion for altruistic causes&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#global-poverty&quot; id=&quot;markdown-toc-global-poverty&quot;&gt;Global poverty&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#farm-animal-welfare&quot; id=&quot;markdown-toc-farm-animal-welfare&quot;&gt;Farm animal welfare&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#existential-risk&quot; id=&quot;markdown-toc-existential-risk&quot;&gt;Existential risk&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#risk-aversion-for-altruists&quot; id=&quot;markdown-toc-risk-aversion-for-altruists&quot;&gt;Risk aversion for altruists&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#uncorrelated-small-donors-are-nearly-risk-neutral&quot; id=&quot;markdown-toc-uncorrelated-small-donors-are-nearly-risk-neutral&quot;&gt;Uncorrelated small donors are nearly risk-neutral&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#how-risk-averse-are-large-donors&quot; id=&quot;markdown-toc-how-risk-averse-are-large-donors&quot;&gt;How risk-averse are large donors?&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#correlated-small-donors-look-like-large-donors&quot; id=&quot;markdown-toc-correlated-small-donors-look-like-large-donors&quot;&gt;Correlated small donors look like large donors&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#time-diversification&quot; id=&quot;markdown-toc-time-diversification&quot;&gt;Time diversification&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#adjusting-for-other-altruists-investing-behavior&quot; id=&quot;markdown-toc-adjusting-for-other-altruists-investing-behavior&quot;&gt;Adjusting for other altruists’ investing behavior&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#what-if-rra-varies-over-time&quot; id=&quot;markdown-toc-what-if-rra-varies-over-time&quot;&gt;What if RRA varies over time?&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#donors-can-decrease-correlation&quot; id=&quot;markdown-toc-donors-can-decrease-correlation&quot;&gt;Donors can decrease correlation&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#bonds&quot; id=&quot;markdown-toc-bonds&quot;&gt;Bonds&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#commodities&quot; id=&quot;markdown-toc-commodities&quot;&gt;Commodities&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#managed-futures-trendfollowing&quot; id=&quot;markdown-toc-managed-futures-trendfollowing&quot;&gt;Managed futures (trendfollowing)&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#buy-and-hold-longshort-strategies&quot; id=&quot;markdown-toc-buy-and-hold-longshort-strategies&quot;&gt;Buy-and-hold long/short strategies&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#longshort-factor-premia&quot; id=&quot;markdown-toc-longshort-factor-premia&quot;&gt;Long/short factor premia&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#startups&quot; id=&quot;markdown-toc-startups&quot;&gt;Startups&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#mission-hedging&quot; id=&quot;markdown-toc-mission-hedging&quot;&gt;Mission hedging&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#what-if-everyone-did-this&quot; id=&quot;markdown-toc-what-if-everyone-did-this&quot;&gt;What if everyone did this?&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#return-expectations&quot; id=&quot;markdown-toc-return-expectations&quot;&gt;Return expectations&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#uncorrelated-returns&quot; id=&quot;markdown-toc-uncorrelated-returns&quot;&gt;Uncorrelated returns&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#long-run-market-return&quot; id=&quot;markdown-toc-long-run-market-return&quot;&gt;Long-run market return&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#caveats&quot; id=&quot;markdown-toc-caveats&quot;&gt;Caveats&lt;/a&gt;        &lt;ul&gt;
          &lt;li&gt;&lt;a href=&quot;#behavior-of-leveraged-investments-in-practice&quot; id=&quot;markdown-toc-behavior-of-leveraged-investments-in-practice&quot;&gt;Behavior of leveraged investments in practice&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#mean-reversion&quot; id=&quot;markdown-toc-mean-reversion&quot;&gt;Mean reversion&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#left-skew-of-investment-returns&quot; id=&quot;markdown-toc-left-skew-of-investment-returns&quot;&gt;Left skew of investment returns&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#unpredictability-of-future-return&quot; id=&quot;markdown-toc-unpredictability-of-future-return&quot;&gt;Unpredictability of future return&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#cost-of-leverage&quot; id=&quot;markdown-toc-cost-of-leverage&quot;&gt;Cost of leverage&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#taxes&quot; id=&quot;markdown-toc-taxes&quot;&gt;Taxes&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#additional-caveats&quot; id=&quot;markdown-toc-additional-caveats&quot;&gt;Additional caveats&lt;/a&gt;&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#how-should-this-change-philanthropists-behavior&quot; id=&quot;markdown-toc-how-should-this-change-philanthropists-behavior&quot;&gt;How should this change philanthropists’ behavior?&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#reasonable-leverage-ratios-under-the-theoretical-model&quot; id=&quot;markdown-toc-reasonable-leverage-ratios-under-the-theoretical-model&quot;&gt;Reasonable leverage ratios (under the theoretical model)&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#deviations-from-the-theoretical-model&quot; id=&quot;markdown-toc-deviations-from-the-theoretical-model&quot;&gt;Deviations from the theoretical model&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#good-ventures--open-philanthropy-project&quot; id=&quot;markdown-toc-good-ventures--open-philanthropy-project&quot;&gt;Good Ventures / Open Philanthropy Project&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#implementation-details&quot; id=&quot;markdown-toc-implementation-details&quot;&gt;Implementation details&lt;/a&gt;        &lt;ul&gt;
          &lt;li&gt;&lt;a href=&quot;#alternative-investment-strategies&quot; id=&quot;markdown-toc-alternative-investment-strategies&quot;&gt;Alternative investment strategies&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#investor-psychology&quot; id=&quot;markdown-toc-investor-psychology&quot;&gt;Investor psychology&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#large-investors&quot; id=&quot;markdown-toc-large-investors&quot;&gt;Large investors&lt;/a&gt;&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#conclusion&quot; id=&quot;markdown-toc-conclusion&quot;&gt;Conclusion&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#confidences&quot; id=&quot;markdown-toc-confidences&quot;&gt;Confidences&lt;/a&gt;        &lt;ul&gt;
          &lt;li&gt;&lt;a href=&quot;#on-risk-and-leverage&quot; id=&quot;markdown-toc-on-risk-and-leverage&quot;&gt;On risk and leverage&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#on-asset-allocation&quot; id=&quot;markdown-toc-on-asset-allocation&quot;&gt;On asset allocation&lt;/a&gt;&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#questions-for-future-consideration&quot; id=&quot;markdown-toc-questions-for-future-consideration&quot;&gt;Questions for future consideration&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#acknowledgements&quot; id=&quot;markdown-toc-acknowledgements&quot;&gt;Acknowledgements&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#changelog&quot; id=&quot;markdown-toc-changelog&quot;&gt;Changelog&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#notes&quot; id=&quot;markdown-toc-notes&quot;&gt;Notes&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;glossary&quot;&gt;Glossary&lt;/h2&gt;

&lt;p&gt;This essay uses a number of terms that may appear unfamiliar. I define most terms when I first use them, but this section provides an easy reference.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;\(\boldsymbol\eta\) (eta)&lt;/strong&gt;: A numeric value for relative risk aversion that determines the shape of an agent’s utility function.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;\(\eta = 0\) implies a linear utility function, that is, no diminishing marginal utility.&lt;/li&gt;
  &lt;li&gt;\(\eta = 1\) implies logarithmic utility, which is the value most often used in theoretical models.&lt;/li&gt;
  &lt;li&gt;\(\eta &amp;gt; 1\) implies sub-logarithmic utility, which empirically describes most individuals’ preferences.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cash:&lt;/strong&gt; For our purposes, cash does not refer to currency, but to an investment that’s considered safe, such as short-term US Treasury bills. See “Risk-free rate.”&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://en.wikipedia.org/wiki/Isoelastic_utility&quot;&gt;&lt;strong&gt;Isoelastic utility function:&lt;/strong&gt;&lt;/a&gt; A utility function that exhibits constant relative risk aversion. If someone has an isoelastic utility function, their risk aversion does not change based on how much money they have.&lt;/p&gt;

&lt;p&gt;The isoelastic utility function is defined by&lt;/p&gt;

&lt;p&gt;\begin{align}
\displaystyle \begin{cases}{\frac {c^{1-\eta }-1}{1-\eta }} &amp;amp; \eta \neq 1 \\ \ln(c) &amp;amp; \eta =1\end{cases}
\end{align}&lt;/p&gt;

&lt;p&gt;where \(c\) is consumption and \(\eta\) is a shape parameter.&lt;/p&gt;
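
&lt;p&gt;As a minimal sketch, the definition above translates directly into code; the \(\eta = 1\) case is the limiting value of the general formula:&lt;/p&gt;

```python
import math

def isoelastic_utility(c: float, eta: float) -> float:
    """Isoelastic (CRRA) utility of consumption c with relative risk aversion eta."""
    if eta == 1:
        return math.log(c)  # logarithmic utility: the limit of the formula as eta approaches 1
    return (c ** (1 - eta) - 1) / (1 - eta)

# eta = 0 is linear utility; larger eta means faster-diminishing marginal utility.
print(isoelastic_utility(2.0, 0))  # 1.0 (linear)
print(isoelastic_utility(2.0, 1))  # ~0.693 (logarithmic)
print(isoelastic_utility(2.0, 2))  # 0.5 (sub-logarithmic)
```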

&lt;p&gt;&lt;a href=&quot;https://en.wikipedia.org/wiki/Leverage_(finance)&quot;&gt;&lt;strong&gt;Leverage:&lt;/strong&gt;&lt;/a&gt; The process of borrowing money in order to invest more than 100% of one’s assets.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://en.wikipedia.org/wiki/Marginal_utility&quot;&gt;&lt;strong&gt;Marginal utility:&lt;/strong&gt;&lt;/a&gt; Utility gained when someone gains a new good or service. For our purposes, we care about the marginal utility of money: how much utility a person gains from an extra dollar.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://en.wikipedia.org/wiki/Risk_aversion#Relative_risk_aversion&quot;&gt;&lt;strong&gt;Relative risk aversion (RRA):&lt;/strong&gt;&lt;/a&gt; The extent to which an agent favors safe bets over risky ones. See the definition of \(\eta\).&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://en.wikipedia.org/wiki/Risk-free_interest_rate&quot;&gt;&lt;strong&gt;Risk-free rate:&lt;/strong&gt;&lt;/a&gt; The interest rate investors can earn without taking on any risk, such as the rate earned by short-term US Treasury bills.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Samuelson share (S):&lt;/strong&gt; The proportion of someone’s portfolio they invest in a risky asset.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;If \(S = 1\), they invest their entire portfolio in the asset.&lt;/li&gt;
  &lt;li&gt;If \(S &amp;lt; 1\), they invest \(S\) of their portfolio in the asset and the rest in cash.&lt;/li&gt;
  &lt;li&gt;If \(S &amp;gt; 1\), they use leverage to invest more than 100% of their portfolio in the asset.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href=&quot;https://en.wikipedia.org/wiki/Utility&quot;&gt;&lt;strong&gt;Utility function:&lt;/strong&gt;&lt;/a&gt; A function that assigns a real number to how much an agent values each possible outcome. In our case, a utility function takes in an amount of money and returns the utility of having that much money.&lt;/p&gt;

&lt;h1 id=&quot;introduction&quot;&gt;Introduction&lt;/h1&gt;

&lt;h2 id=&quot;what-makes-altruistic-investing-different-from-traditional-investing&quot;&gt;What makes altruistic investing different from traditional investing?&lt;/h2&gt;

&lt;p&gt;For the most part, altruists and traditional investors have the same incentives regarding how they should invest—they want to invest in the portfolio with the best return for an acceptable level of risk. Altruists and self-interested investors differ in two key ways:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Charitable causes probably have slower-diminishing marginal utility of money than individuals, so altruists should seek greater risk.&lt;/li&gt;
  &lt;li&gt;Self-interested people only care about how much money they have, while altruists care about how much money all other (value-aligned) altruists have.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And arguably a third way (more on this later):&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Traditional individuals avoid certain market-beating factors like value and momentum for reasons that aren’t captured by a standard utility function, and altruists do not share these reasons.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Regarding the first two differences between altruists and other investors:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;The first difference means altruists should be willing to use more leverage than individuals, to the extent that their marginal utility diminishes more slowly.&lt;/li&gt;
  &lt;li&gt;The second difference implies that altruists who are thoughtful about investing should not proportionally invest their money in the optimal portfolio; instead, they should attempt to push the overall pool of value-aligned philanthropic money in the direction of optimal.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2 id=&quot;improving-on-conventional-investing-wisdom&quot;&gt;Improving on conventional investing wisdom&lt;/h2&gt;

&lt;p&gt;In &lt;a href=&quot;https://80000hours.org/2015/10/common-investing-mistakes-in-the-effective-altruism-community/&quot;&gt;Common Investing Mistakes in the Effective Altruism Community&lt;/a&gt;, Ben Todd describes some ways people can improve their investment decisions over the status quo. Before considering leverage, investors should understand and implement these suggestions.&lt;/p&gt;

&lt;p&gt;My essay will assume that readers agree with Todd’s section 3, “Not being diversified enough.” Quoting the most relevant portion:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;Many people I’ve spoken to are almost fully invested in US equities. I think the rationale for this is that equities have been the best returning asset historically, so there’s no reason to own anything else. Another rationale is that since you can’t beat the market, you should put everything into equities.&lt;/p&gt;

  &lt;p&gt;But US stocks do not equal “the market”. If you try to tally up all global financial assets, you get something like this:&lt;/p&gt;

  &lt;pre&gt;&lt;code&gt;18% US stocks
13% Foreign developed stocks
5% Foreign emerging stocks
20% Global corporate bonds
14% 30 year bonds
14% 10 year foreign bonds
2% TIPs
5% REITs
5% commodities
5% gold
&lt;/code&gt;&lt;/pre&gt;

  &lt;p&gt;This represents the truly agnostic portfolio. If you think you have no ability to beat the market, then this is the portfolio with the best risk-return. 100% US equities is a huge bet on just one asset.&lt;/p&gt;

  &lt;p&gt;From 1973 to 2013, a portfolio like this returned 9.9% per year. In comparison, stocks returned 10.2%. So you only gave up a tiny 0.3% to switch to this portfolio.&lt;/p&gt;

  &lt;p&gt;In return, you had far lower risk. The volatility of the 100% equity portfolio was 15.6%, whereas this diversified portfolio had a volatility of only 8%. The maximum drawdown was also only -27% compared to -51% with equities. The wide diversification also makes you less vulnerable to unforeseen tail risks.&lt;/p&gt;

  &lt;p&gt;The much lower volatility means you could have levered up 2x and ended up with the same amount of volatility and same drawdowns as equities, but returns that were twice as high, at 20% per year.&lt;/p&gt;
&lt;/blockquote&gt;
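
&lt;p&gt;The quoted arithmetic is easy to check. A minimal sketch, under the simplifying assumption of zero borrowing costs (in practice leverage costs roughly the risk-free rate on the borrowed portion, which lowers the levered return):&lt;/p&gt;

```python
# Figures quoted above: the diversified portfolio vs. 100% equities, 1973-2013.
diversified_return = 0.099  # 9.9% per year
diversified_vol = 0.08      # 8% annualized volatility
equity_vol = 0.156          # 15.6% annualized volatility

leverage = 2.0
levered_return = leverage * diversified_return  # about 19.8% per year
levered_vol = leverage * diversified_vol        # 16%, about the same as equity_vol

print(f"levered return {levered_return:.1%}, levered vol {levered_vol:.1%}")
```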

&lt;p&gt;Additionally, I would recommend readers review Todd’s section 6, “Not beating the market”:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;Most people I speak to are sold on the “expert common sense” view that amateur investors shouldn’t try to beat the market, and should instead invest in index funds. I basically agree with this view. However, if you’ve got a little more time to put into investing, I think it’s worth considering the idea of tilting your investments towards assets that have value (are cheap based on metrics like P/E and P/B), high momentum (have gone up in the last 12 months, are above their 200-day moving average) and low volatility. If you do this within equities, I think it’s possible to beat the market by a couple of percentage points.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Most people are skeptical when I claim that value and momentum investing beat the market, as I believe they ought to be. Justifying this claim against the efficient market hypothesis requires a high burden of proof. I cannot meet this burden in only a few paragraphs, so I will refer to some longer writings on the subject for anyone still curious.&lt;/p&gt;

&lt;p&gt;Asness et al. (2013) &lt;a href=&quot;https://www.aqr.com/Insights/Research/Journal-Article/Value-and-Momentum-Everywhere&quot;&gt;Value and Momentum Everywhere&lt;/a&gt; examines eight asset classes (US stocks, UK stocks, continental European stocks, Japanese stocks, country equity index futures, government bonds, currencies, and commodity futures) and finds that value and momentum work in all eight.&lt;/p&gt;

&lt;p&gt;Two additional papers: &lt;a href=&quot;https://www.aqr.com/Insights/Research/Journal-Article/Fact-Fiction-and-Momentum-Investing&quot;&gt;Fact, Fiction and Momentum Investing&lt;/a&gt;; and &lt;a href=&quot;https://www.aqr.com/Insights/Research/Journal-Article/Fact-Fiction-and-Value-Investing&quot;&gt;Fact, Fiction and Value Investing&lt;/a&gt;. Targeted at skeptics, these articles offer high-level summaries and refer to other academic papers that provide evidence for the efficacy of momentum and value.&lt;/p&gt;

&lt;p&gt;The books &lt;a href=&quot;https://www.amazon.com/Quantitative-Value-Web-Site-Practitioners/dp/1118328078&quot;&gt;Quantitative Value&lt;/a&gt; and &lt;a href=&quot;https://www.amazon.com/Quantitative-Momentum-Practitioners-Momentum-Based-Selection/dp/111923719X&quot;&gt;Quantitative Momentum&lt;/a&gt;, the first by Wesley Gray and Tobias Carlisle, and the second by Wesley Gray and Jack Vogel, discuss value and momentum, respectively. These books discuss why value and momentum work and how the authors believe the strategies can best be implemented. For readers specifically interested in whether or how it is possible to beat the market, I believe &lt;em&gt;Quantitative Momentum&lt;/em&gt; gives a more concise and better-reasoned explanation, particularly in the first two chapters.&lt;/p&gt;

&lt;p&gt;One could argue that individuals should not avoid value and momentum, and that by avoiding them they &lt;a href=&quot;https://alphaarchitect.com/2014/05/19/introduction-behavioral-finance-part-1-behavioral-bias/&quot;&gt;behave irrationally&lt;/a&gt;. I mostly agree with this, but individuals do have valid reasons for wanting to avoid value and momentum—these strategies can underperform the broad market for long stretches of time, which many people may find undesirable for various reasons:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;They care about maintaining social status, and thus are risk averse not only with respect to losing money, but also with respect to losing money &lt;em&gt;relative to&lt;/em&gt; their peers.&lt;/li&gt;
  &lt;li&gt;For professional investors, relative (but not absolute) underperformance can cause them to lose clients.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For altruists, these are not compelling reasons to avoid market-beating strategies, so altruists should make efforts to sidestep the behavioral biases that lead people to avoid them.&lt;/p&gt;

&lt;h2 id=&quot;risk-and-leverage-for-traditional-investors&quot;&gt;Risk and leverage for traditional investors&lt;/h2&gt;

&lt;p&gt;According to standard economic theory, individuals have some level of risk tolerance that dictates how much risk they are willing to take on their investments. For our purposes, we can assume that individuals have some &lt;a href=&quot;https://en.wikipedia.org/wiki/Isoelastic_utility&quot;&gt;isoelastic&lt;/a&gt; utility function, which means there exists a constant \(\eta\) &amp;gt; 0 describing someone’s relative risk aversion (RRA), and their utility function is given by:&lt;/p&gt;

&lt;p&gt;\begin{align}
\displaystyle \begin{cases}{\frac {c^{1-\eta }-1}{1-\eta }} &amp;amp; \eta \neq 1 \\ \ln(c) &amp;amp; \eta =1\end{cases}
\end{align}&lt;/p&gt;

&lt;p&gt;An \(\eta\) of 0 corresponds to linear marginal utility of money (that is, no diminishing marginal utility). \(\eta\) = 1 indicates logarithmic utility, while \(\eta\) &amp;gt; 1 gives sub-logarithmic utility. Different individuals report substantially different RRAs, and different methods of empirically determining \(\eta\) give different results; but most people appear to have \(\eta\) values ranging from 1 to 4&lt;sup id=&quot;fnref:15&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:15&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;\(\eta\) can be used to determine how much risk someone is willing to take on an investment. For simplicity, suppose only two investments exist: a risk-free investment and a risky one. (The risky investment might not be a single asset, but a diversified combination of stocks and bonds.) Let \(R_f\) be the rate of return of the risk-free investment, and let the risky investment follow a log-normal distribution parameterized by \(\mu\) and \(\sigma\): that is, the logarithm of the distribution is a normal distribution with mean \(\mu\) and standard deviation \(\sigma\).&lt;/p&gt;

&lt;p&gt;Let Alice be an investor with RRA equal to \(\eta\). Alice can invest some proportion of her portfolio into her risky asset; Ayres and Nalebuff in their book &lt;a href=&quot;http://lifecycleinvesting.net&quot;&gt;Lifecycle Investing&lt;/a&gt; refer to this proportion as the Samuelson share.&lt;/p&gt;

&lt;p&gt;Under traditional model assumptions, the optimal Samuelson share is given by \(S = \displaystyle\frac{\mu - R_f}{\sigma^2 \eta}\). If S &amp;lt; 1, Alice should put the rest of her money into the risk-free asset; if S &amp;gt; 1, she should use leverage to invest more than 100% of her portfolio.&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt;&lt;sup id=&quot;fnref:12&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:12&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;For most reasonable assumptions about future stock returns and for typical values of \(\eta\), S &amp;lt; 1. That is, most people do not want to invest more than 100% into stocks.&lt;/p&gt;
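
&lt;p&gt;To illustrate, here is the formula with made-up but plausible parameter values (these numbers are assumptions for illustration, not estimates from any source): a 7% expected risky return, a 2% risk-free rate, 16% volatility, and \(\eta = 2\) give a Samuelson share just under 1.&lt;/p&gt;

```python
def samuelson_share(mu: float, r_f: float, sigma: float, eta: float) -> float:
    """Optimal fraction of the portfolio in the risky asset: (mu - r_f) / (sigma^2 * eta)."""
    return (mu - r_f) / (sigma ** 2 * eta)

# Illustrative assumptions, not estimates from any source.
s = samuelson_share(mu=0.07, r_f=0.02, sigma=0.16, eta=2)
print(f"Samuelson share: {s:.2f}")  # about 0.98: just under fully invested
```

&lt;p&gt;Under these assumptions, Alice holds about 98% stocks and 2% cash; lowering \(\eta\) toward 1 pushes \(S\) above 1 and calls for leverage.&lt;/p&gt;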

&lt;p&gt;However, Ayres and Nalebuff argue in &lt;a href=&quot;http://lifecycleinvesting.net&quot;&gt;Lifecycle Investing&lt;/a&gt; that people should treat their future income as an asset in their portfolio. Because young people will have much more money in the future than they do today, they should use leverage to diversify their investments across time. If you expect that 90% of your investable earnings lie in the future and you have all your current investments in stocks, your portfolio is effectively 10% stocks/90% future-earnings. If instead you get 2:1 leverage, you now have 20% in stocks and 80% in future earnings, which is more reasonable than 10%/90%. (Arguably you should have something like 5:1 leverage so that your portfolio contains half stocks and half future earnings; but that much leverage brings additional complications, so Ayres and Nalebuff only recommend going up to 2:1.)&lt;/p&gt;
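
&lt;p&gt;The arithmetic behind this example, using the hypothetical 10%/90% split from above:&lt;/p&gt;

```python
# Hypothetical split: 90% of lifetime investable wealth is future income.
future_earnings = 0.90
current_savings = 1 - future_earnings

for leverage in (1, 2, 5):
    stocks = current_savings * leverage
    print(f"{leverage}:1 leverage: {stocks:.0%} stocks / {1 - stocks:.0%} future earnings")
```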

&lt;h1 id=&quot;risk-aversion-for-altruistic-causes&quot;&gt;Risk aversion for altruistic causes&lt;/h1&gt;

&lt;p&gt;What RRA (\(\eta\)) should we use for charitable causes and how does it compare to individuals’ RRAs? This is a complex question and I cannot offer a complete answer within the scope of this essay. People often assume that altruists’ utility functions lie somewhere between logarithmic and linear (\(0 &amp;lt; \eta \le 1\)); e.g., Paul Christiano &lt;a href=&quot;https://rationalaltruist.com/2013/02/28/risk-aversion-and-investment-for-altruists/&quot;&gt;wrote&lt;/a&gt;, “I think logarithmic returns are fairly conservative for altruistic projects […], and that for most causes the payoffs are much more risk-neutral than that.” I consider this a reasonable assumption, but the question deserves more investigation. A full analysis is out of scope, but I will briefly model how utility might diminish for three of the most popular cause areas in effective altruism: global poverty, farm animal welfare, and existential risk.&lt;/p&gt;

&lt;h2 id=&quot;global-poverty&quot;&gt;Global poverty&lt;/h2&gt;

&lt;p&gt;In the long run, marginal utility for philanthropists cannot diminish any faster than the rate of the self-interested individual with the slowest-diminishing utility curve. If some person has a lower \(\eta\) than everyone else, then once everyone’s welfare has improved to the point where that person has the greatest marginal utility of money (which might take a long time, but in theory will happen eventually), philanthropists can always do at least as well as simply giving their money directly to that person.&lt;/p&gt;

&lt;p&gt;But the person with the steepest utility curve might not currently have the highest marginal utility of money, because there probably exist many other worse-off people who need money more. We care about how philanthropists should invest their money right now, which means we want to know what the utility curve of global poverty interventions currently looks like.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://www.givewell.org/charities/give-directly&quot;&gt;GiveWell rates GiveDirectly&lt;/a&gt; as one of the top charities working on global poverty. GiveDirectly simply gives cash directly to the world’s poorest people. GiveWell believes cash transfers do not represent the most cost-effective global poverty intervention, but that they probably aren’t much worse than the best interventions. And we can analyze GiveDirectly’s marginal utility easily because we can model its value in terms of the utility curves of the people receiving the cash. So for now, let’s assume GiveDirectly represents the pinnacle of global poverty charities.&lt;/p&gt;

&lt;p&gt;For simplicity, let’s assume GiveDirectly has no overhead costs and can give the world’s poorest people money without encountering any obstacles. GiveDirectly’s goal is to provide as much utility to as many people as possible by increasing their wealth and thus allowing them to buy more stuff that makes their lives better—that is, GiveDirectly wants to maximize the aggregated individual consumption utility function:&lt;/p&gt;

&lt;p&gt;\begin{align}
\displaystyle\sum_1^n u_i(C(i))
\end{align}&lt;/p&gt;

&lt;p&gt;where \(i\) represents the \(i^{th}\) person, \(C(i)\) is the consumption level of the \(i^{th}\) person, and \(u_i()\) gives the utility function over consumption for the \(i^{th}\) person. (Note that this aggregate utility function is almost certainly not isoelastic, which means we cannot assign it a relative risk aversion parameter \(\eta\).)&lt;/p&gt;

&lt;p&gt;GiveDirectly’s mathematically-optimal strategy—assuming all individuals have the same value of \(\eta\) (which is false, but a common simplifying assumption in development economics)—is to give cash to the world’s poorest person until they have as much money as the second-poorest person; then give cash to the bottom two poorest people until they reach the level of the third-poorest person, and so on.&lt;/p&gt;
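
&lt;p&gt;This “fill up from the bottom” strategy can be sketched as a water-filling calculation (a toy model with made-up wealth levels, keeping the simplifying assumption that everyone shares the same concave utility function):&lt;/p&gt;

```python
def equalizing_level(wealth, budget):
    """Water-filling: find the common level L such that topping up everyone
    below L to exactly L spends the whole budget. With identical concave
    utility functions, this equalizing transfer maximizes total utility."""
    w = sorted(wealth) + [float("inf")]  # sentinel so the loop always returns
    spent = 0.0
    for k in range(len(wealth)):
        # Cost of raising the k+1 poorest people from level w[k] to w[k+1].
        step_cost = (k + 1) * (w[k + 1] - w[k])
        if spent + step_cost >= budget:
            return w[k] + (budget - spent) / (k + 1)
        spent += step_cost

# Toy example: four recipients with hypothetical wealth levels and a budget of 5.
wealth = [1, 2, 4, 10]
level = equalizing_level(wealth, budget=5)
transfers = [max(0, level - x) for x in wealth]
print(level, transfers)  # level 4.0; transfers [3.0, 2.0, 0, 0]
```

&lt;p&gt;With a budget of 5, the poorest recipient is first brought up to the level of the second, then both are brought up to the level of the third, at which point the budget runs out and everyone below the final level has equal marginal utility.&lt;/p&gt;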

&lt;p&gt;The world’s poorest person and the world’s millionth-poorest person have almost the same amount of money, so giving a dollar to the latter does almost as much good as giving a dollar to the former. The world contains many more poor people than GiveDirectly can afford to help, so (given our simplifying assumptions) cash transfers have nearly linear marginal utility of money, at least for donors with less than a few hundred billion dollars&lt;sup id=&quot;fnref:16&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:16&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;We had to make several simplifying assumptions to reach this conclusion. I cannot say with confidence that the conclusion holds in the real world, but it does seem plausible that this simplified model roughly describes how the global poverty cause operates in terms of its diminishing marginal utility:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;In practice, GiveDirectly can give money only to people it can reach; but it can reach such a large set of people that this does not substantially affect its rate of diminishing utility.&lt;/li&gt;
  &lt;li&gt;GiveDirectly’s overhead costs scale sub-linearly with money donated&lt;sup id=&quot;fnref:6&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:6&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;5&lt;/a&gt;&lt;/sup&gt;, so they don’t cause the utility curve to fall off more rapidly.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Interventions that GiveWell considers more cost-effective than cash transfers, such as &lt;a href=&quot;https://www.givewell.org/international/technical/programs/deworming&quot;&gt;deworming&lt;/a&gt;, probably experience more rapidly diminishing marginal utility. Or, as GiveWell might say, these interventions have less funding capacity than cash transfers do. Let’s call these “superior” interventions, in the sense that they are superior to cash transfers. We can determine how quickly their marginal utility diminishes by answering three questions:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;How much better are superior global poverty interventions than cash transfers?&lt;/li&gt;
  &lt;li&gt;How much philanthropic money has already gone into superior global poverty interventions?&lt;/li&gt;
  &lt;li&gt;How much money would need to flow into global poverty before all superior interventions became fully funded?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I don’t know the answers to any of these questions, but let’s make up some plausible-on-their-face answers and see what we get.&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;GiveWell’s &lt;a href=&quot;https://docs.google.com/spreadsheets/d/1zLmPuddUmKsy3v55AfG_e1Quk-ngDdNzW-FDx0T-Y94/edit#gid=1034883018&quot;&gt;cost-effectiveness analysis&lt;/a&gt; gives an answer for the first question: GiveWell estimates that its top charities do between 5 and 60 times as much good as GiveDirectly.&lt;sup id=&quot;fnref:7&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:7&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;6&lt;/a&gt;&lt;/sup&gt; The 60x charity is an outlier and GiveWell’s estimate might not be accurate, so let’s suppose the current best charity is 10x as cost-effective as cash transfers.&lt;/li&gt;
  &lt;li&gt;The &lt;a href=&quot;https://www.usaid.gov/cj&quot;&gt;USAID budget&lt;/a&gt; is $40 billion. We can extrapolate this to perhaps $200 billion per year in global spending, and maybe $10 trillion in the past few decades. Prior to a few decades ago, GDPs were low enough and global poverty spending was unpopular enough that, for the sake of this rough estimate, we can assume prior spending was $0. Of this $10 trillion, maybe $1 trillion is spent effectively. (One could probably come up with a much better estimate with a few hours of research.)&lt;/li&gt;
  &lt;li&gt;In 2018, GiveWell moved $111 million to top charities other than GiveDirectly (see &lt;a href=&quot;https://files.givewell.org/files/metrics/GiveWell_Metrics_Report_2018.pdf&quot;&gt;GiveWell Metrics Report&lt;/a&gt;) and &lt;a href=&quot;https://www.givewell.org/about/impact&quot;&gt;moved&lt;/a&gt; similar amounts in 2015–2017. GiveWell probably can keep moving money for a while, and its top charities only represent a fraction of better-than-cash-transfers giving opportunities—we should include effectively-utilized money from other philanthropic organizations such as the Gates Foundation. So philanthropists can spend perhaps $100 billion before running out of such opportunities.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Assign the answers to these questions to symbols \(b\), \(m\), and \(m&apos;\), respectively. The value of \(\eta\) is given by&lt;sup id=&quot;fnref:8&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:8&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;7&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;\begin{align}
\eta = \displaystyle\frac{\log b}{\log (1 + m' / m)}
\end{align}&lt;/p&gt;

&lt;p&gt;For the given values of \(b\) = 10, \(m\) = $1 trillion, and \(m&apos;\) = $100 billion, we get \(\eta\) = 24, which indicates remarkably high risk aversion (most individuals have an RRA between 1 and 4&lt;sup id=&quot;fnref:15:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:15&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;). This implies that the 100 billionth dollar spent on global poverty did \(10^{24}\) times as much good as the trillionth dollar, which seems highly implausible.&lt;/p&gt;

&lt;p&gt;If instead we assume that global poverty interventions can absorb a full $1 trillion, we get a relative risk aversion of 3.3. Perhaps we also believe that few organizations spent their money ineffectively, and only $100 billion went to effective causes, which means over 90% of the value of superior global poverty interventions has yet to be captured; giving \(\eta\) = 1.&lt;/p&gt;
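&lt;p&gt;These scenarios are easy to sanity-check by plugging the numbers into the formula above (a sketch; the function name is mine):&lt;/p&gt;

```python
import math

def eta(b, m, m_prime):
    """RRA implied by: the best intervention is b times better than cash
    transfers, $m already spent effectively, $m' more until exhaustion."""
    return math.log(b) / math.log(1 + m_prime / m)

print(eta(10, 1e12, 1e11))  # m = $1T,   m' = $100B -> ~24
print(eta(10, 1e12, 1e12))  # m = $1T,   m' = $1T   -> ~3.3
print(eta(10, 1e11, 1e12))  # m = $100B, m' = $1T   -> ~0.96
```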

&lt;p&gt;Alternatively, we could ask what proportion \(p\) of superior giving opportunities philanthropists have funded already. This proportion equals \(\displaystyle\frac{m}{m + m&apos;}\), so in the first set of numbers given, this would equal $1 trillion / $1.1 trillion = 0.91.&lt;/p&gt;

&lt;p&gt;My intuition suggests that philanthropists have used up most of the superior opportunities, yet they still need much more funding to exhaust them—I would guess we are substantially less than 90% of the way there. (Some people who reviewed this essay had the opposite intuition—that the majority of superior opportunities have yet to be filled.) I do not have any particular expertise on this issue and find a wide range of answers plausible, which means \(\eta\) could plausibly take on many values. But it does seem unlikely that cash transfers would have a much smaller \(\eta\) than self-interested people’s, while other global poverty interventions would have a much larger \(\eta\).&lt;/p&gt;

&lt;p&gt;To view this from another angle, and pump intuitions in a different way: How much money do we believe superior interventions can absorb relative to cash transfers? I would guess that superior interventions can absorb close to as much funding as cash transfers (to within a factor of two), which suggests that the two \(\eta\)s cannot differ by much.&lt;/p&gt;

&lt;p&gt;It might make more sense to model superior giving opportunities using a non-isoelastic utility curve (i.e., the value of \(\eta\) is not fixed). This would violate many of the assumptions in this section, requiring much more analysis.&lt;/p&gt;

&lt;p&gt;Ultimately, my weakly-informed guess is that superior global poverty interventions experience diminishing marginal utility at a slightly but not much higher rate than cash transfers.&lt;/p&gt;

&lt;h2 id=&quot;farm-animal-welfare&quot;&gt;Farm animal welfare&lt;/h2&gt;

&lt;p&gt;If we wish to fund interventions that attempt to reduce the incidence of factory farming or improve farm animals’ well-being, what diminishing marginal utility might we expect these interventions to experience?&lt;/p&gt;

&lt;p&gt;We can think about farm animal welfare as an extension of global poverty in at least two different ways, each of which results in a different model and a different answer for \(\eta\).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The first way:&lt;/strong&gt; We can consider factory farming interventions in the same way we examined superior global poverty interventions. Some amount of money has been spent on preventing factory farming; and some additional amount of money can be spent before efforts on farm animal welfare are no more cost-effective than cash transfers. We can answer the same three questions we raised in the previous section and find the value of \(\eta\) for factory farming interventions.&lt;/p&gt;

&lt;p&gt;We probably believe that the best farm animal interventions are more cost-effective than the top global poverty interventions, meaning we assign a greater value to \(b\). All else equal, a larger \(b\) results in a larger \(\eta\)—that is, greater risk aversion.&lt;/p&gt;

&lt;p&gt;Pushing in the other direction, probably only a small fraction of value in the anti-factory-farming space has been captured already, which means the proportion \(p = \frac{m}{m + m&apos;}\) is small.&lt;/p&gt;

&lt;p&gt;Some example values:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;if \(b = 100\) and \(p = 1/2\), then \(\eta = 6.6\)&lt;/li&gt;
  &lt;li&gt;if \(b = 100\) and \(p = 1/10\), then \(\eta = 2\)&lt;/li&gt;
  &lt;li&gt;if \(b = 100\) and \(p = 1/100\), then \(\eta = 1\)&lt;/li&gt;
  &lt;li&gt;if \(b = 10\) and \(p = 1/1000\), then \(\eta = 0.33\)&lt;/li&gt;
&lt;/ul&gt;
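&lt;p&gt;These example values follow from the earlier formula by substituting \(1 + m'/m = 1/p\), so that \(\eta = \log b / \log(1/p)\). A quick check (function name mine):&lt;/p&gt;

```python
import math

def eta_from_p(b, p):
    # Since p = m / (m + m'), we have 1 + m'/m = 1/p,
    # so eta = log(b) / log(1/p).
    return math.log(b) / math.log(1 / p)

for b, p in [(100, 1/2), (100, 1/10), (100, 1/100), (10, 1/1000)]:
    print(b, p, eta_from_p(b, p))  # ~6.6, 2, 1, and 0.33 respectively
```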

&lt;p&gt;&lt;strong&gt;The second way:&lt;/strong&gt; When considering cash transfers, we looked at how much money it would take to improve the station of the N worst-off people so that they become as well off as the (N+1)&lt;sup&gt;th&lt;/sup&gt;-worst off person. We can extend this model by looking at a set of sentient beings including both humans and factory-farmed animals (and naturally we could extend this to include other beings as well, but for now let’s just consider those two categories). As a weird but useful and sort-of-correct abstraction, assume factory-farmed animals consume money in the same way that humans do, but that they’re really, really poor, so they have extremely high marginal utility of money. Then philanthropists should give money to help animals until all the animals are as well off as globally poor humans. According to this model, donations to farmed animal welfare have nearly linear marginal utility.&lt;/p&gt;

&lt;p&gt;This model seems plausible in some respects, but the first model requires unreasonably small values of \(b\) and/or \(p\) to produce a similar answer. Both of these models have several obvious shortcomings. They are only meant to be a rough start, not a definitive answer.&lt;/p&gt;

&lt;h2 id=&quot;existential-risk&quot;&gt;Existential risk&lt;/h2&gt;

&lt;p&gt;Work on existential risk reduction often involves doing research: exploring ways of neutralizing engineered super-viruses, studying how to create stable AI systems, etc. Other types of work can be done, but I will focus on research for now.&lt;/p&gt;

&lt;p&gt;Effective researchers begin by focusing on the most promising potential directions. As funding for a field increases, researchers begin pursuing less promising avenues. How quickly does the value of research diminish?&lt;/p&gt;

&lt;p&gt;Owen Cotton-Barratt claims in an article, &lt;a href=&quot;http://www.fhi.ox.ac.uk/law-of-logarithmic-returns/&quot;&gt;The Law of Logarithmic Returns&lt;/a&gt;, that the returns to research grow logarithmically with resources invested. He refers to Nicholas Rescher’s book &lt;em&gt;Scientific Progress&lt;/em&gt;: “starting with the observation that progress (counted in terms of the number of “first rate” discoveries) goes linearly with time while resources increase exponentially, [Rescher] deduces that the underlying behaviour is a logarithmic return to resources.” Cotton-Barratt presents some additional empirical evidence suggesting that returns to research scale logarithmically. His follow-up article, &lt;a href=&quot;http://www.fhi.ox.ac.uk/theory-of-log-returns/&quot;&gt;Theory Behind Logarithmic Returns&lt;/a&gt;, provides theoretical justification for this observation.&lt;/p&gt;

&lt;p&gt;Additionally, I spoke with some people involved in existential risk research, and they (independently) agreed that research probably produces logarithmic returns.&lt;/p&gt;

&lt;p&gt;Logarithmic returns entail \(\eta = 1\). Investors with logarithmic utility have greater risk tolerance than most individual investors by a factor of perhaps 1.5 to 4, but still much less risk tolerance than someone with near-linear utility (such as a donor to GiveDirectly).&lt;/p&gt;

&lt;p&gt;Note that this applies to any research-based cause, which could include potential existential risks such as AI safety, biosecurity, and climate change, but also can include other cause areas such as disease prevention, macroeconomic policy, and anti-aging.&lt;/p&gt;

&lt;h1 id=&quot;risk-aversion-for-altruists&quot;&gt;Risk aversion for altruists&lt;/h1&gt;
&lt;h2 id=&quot;uncorrelated-small-donors-are-nearly-risk-neutral&quot;&gt;Uncorrelated small donors are nearly risk-neutral&lt;/h2&gt;

&lt;p&gt;Suppose you are a philanthropist with a relatively small amount of money, and your investments have no correlation with other philanthropists’. (This second assumption rarely holds true in practice, but let’s take it as a given for now.) If you want to donate to a cause that already has much more money than you do, you have nearly linear marginal utility of money, which means you should behave nearly risk-neutrally.&lt;/p&gt;

&lt;p&gt;As a numerical example, let’s say your preferred cause area has $1 billion in funding and you currently have $1 million. Further suppose that this cause has logarithmic utility of money, which means $1 billion produces log(1 billion) = 20.723 utils (where a util is an abstract measurement of utility). If you add your $1 million, your cause now gets log(1,001,000,000) = 20.724 utils, giving (approximately) an extra .001 utils.&lt;/p&gt;

&lt;p&gt;If the cause as a whole doubles its funding to $2 billion, the utility will increase from 20.723 to 21.416—so the first $1 billion is worth far more than the second billion. If you double your $1 million to $2 million, the utility of the cause increases to 20.725. Your first million provides .001 utils, and your second million provides an additional .001 utils. The second million does provide less value, but only slightly less—if we use more significant figures, we can see that the first million gives .0009995 utils and the second million gives .0009985.&lt;/p&gt;

&lt;p&gt;To look at it another way, suppose you have the choice between getting either (A) $X with certainty, or (B) 50% chance of getting $2 million and 50% of getting nothing. At what value of X are you indifferent between A and B? Most self-interested people would probably say something in the range of $50,000 to $300,000, with some people going somewhat higher or lower.&lt;sup id=&quot;fnref:5&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:5&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;8&lt;/a&gt;&lt;/sup&gt; But in this example, your value of X would be $999,500.&lt;/p&gt;
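&lt;p&gt;The arithmetic in this example is straightforward to verify (a sketch; the variable names are mine):&lt;/p&gt;

```python
import math

base = 1e9     # existing cause funding, in dollars
u = math.log   # logarithmic utility of total funding

first  = u(base + 1e6) - u(base)        # ~0.0009995 utils
second = u(base + 2e6) - u(base + 1e6)  # ~0.0009985 utils

# Certainty equivalent X of a 50/50 shot at $2 million vs. nothing:
eu = 0.5 * u(base) + 0.5 * u(base + 2e6)
X = math.exp(eu) - base  # ~ $999,500
```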

&lt;h2 id=&quot;how-risk-averse-are-large-donors&quot;&gt;How risk-averse are large donors?&lt;/h2&gt;

&lt;p&gt;While uncorrelated small donors experience nearly linear marginal utility of money, large donors do not. For our purposes, a large donor is one who can fund a significant fraction of their preferred cause area. A donor who is the sole funder of a cause experiences diminishing marginal utility at the same rate as the cause itself, as discussed in &lt;a href=&quot;#risk-aversion-for-altruistic-causes&quot;&gt;Risk aversion for altruistic causes&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id=&quot;correlated-small-donors-look-like-large-donors&quot;&gt;Correlated small donors look like large donors&lt;/h2&gt;

&lt;p&gt;Unlike individual investors, small donors do not care only about how their own portfolios perform. Altruists who want their causes to receive more funding also want fellow donors’ investments to perform well.&lt;/p&gt;

&lt;p&gt;Quoting &lt;a href=&quot;http://rationalaltruist.com/2013/02/28/risk-aversion-and-investment-for-altruists/&quot;&gt;Paul Christiano&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;[T]he fact that I am a small piece of the charitable donations to a cause shouldn’t matter. My risk is well-correlated with the risk of other investors, and if I lose 10% of my money in a year, other investors will also lose 10% of their money, and less money will be available for charitable giving. This holds regardless of whether a cause has a million donors or just one.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In effect, many small donors with highly correlated investments have similar risk preferences to a single large investor.&lt;/p&gt;

&lt;h2 id=&quot;time-diversification&quot;&gt;Time diversification&lt;/h2&gt;

&lt;p&gt;The book &lt;a href=&quot;http://lifecycleinvesting.net/Resources/merton%20lifetime%20portfolio%20selection%201969.pdf&quot;&gt;Lifecycle Investing&lt;/a&gt; explains the concept of time diversification (an excerpt from the first chapter, which introduces the concept, is &lt;a href=&quot;http://lifecycleinvesting.net/LifecycleInvesting_book_excerpt.pdf&quot;&gt;available online&lt;/a&gt;). The basic idea: Treat your future income as an asset in your investment portfolio. Suppose you have $10,000 in investments and expect to invest an additional $90,000 of your earnings over the next 20 years. In that case, you have invested only 10% of your lifetime savings. If you invest in a standard 60% stocks/40% bonds portfolio, you will have $6,000 in stocks today but $60,000 by the time you retire (adjusted for inflation and market returns). That means you have much more exposure to market fluctuations in 2040 than in 2020. You can solve this by diversifying across time: applying leverage to your current $10,000 portfolio while holding more bonds or cash in 2040.&lt;/p&gt;

&lt;p&gt;The principle of time diversification applies to philanthropists as well, in two different ways:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;You should consider the donations you can make with your future income.&lt;/li&gt;
  &lt;li&gt;The future might contain more value-aligned altruists who will increase the size of the philanthropic money pool.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;em&gt;Lifecycle Investing&lt;/em&gt; provides information on how to determine optimal leverage at each point in time given various assumptions, particularly in chapters 3 and 4.&lt;/p&gt;

&lt;p&gt;According to &lt;em&gt;Lifecycle Investing&lt;/em&gt;, investors should calculate their optimal Samuelson share. Then they should consider the discounted present value of their future income as an asset in their portfolio, and hold stocks in the correct proportion of their lifetime portfolio. For example, an investor with $10,000 in stocks and $90,000 in discounted future earnings&lt;sup id=&quot;fnref:20&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:20&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;9&lt;/a&gt;&lt;/sup&gt;, and with a Samuelson share of 0.6, should ideally hold 60% in stocks. Therefore, they should theoretically invest with 6:1 leverage to get their stock holding up to $60,000. Their overall portfolio would then look like this:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt; $60,000 stocks
-$50,000 debt
 $90,000 future cash
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;which can be simplified to&lt;/p&gt;

&lt;pre&gt;&lt;code&gt; $60,000 stocks
 $40,000 cash
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;giving the investor their desired portfolio.&lt;/p&gt;

&lt;p&gt;Similarly, if you are the sole funder of a cause, you currently have $10 million, and you expect the cause’s funding in the future to increase to $30 million in present-value dollars, then you can treat that extra $20 million as cash in your portfolio, and increase your current portfolio’s risk by 3x. If you have a Samuelson share of 1, increase your risky holdings from 100% to 300% (that is, 3:1 leverage); at a Samuelson share of 0.5, increase risky holdings from 50% to 150%.&lt;/p&gt;
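&lt;p&gt;Both this example and the earlier one follow the same rule: target risky holdings equal the Samuelson share times the lifetime portfolio (current assets plus the present value of future money), and the implied leverage ratio is that target divided by current assets. A sketch (the function name is mine):&lt;/p&gt;

```python
def leverage(current, future, samuelson_share):
    """Ratio of risky holdings to *current* assets when future money
    (at present value) counts as cash in the lifetime portfolio."""
    target_risky = samuelson_share * (current + future)
    return target_risky / current

print(leverage(10_000, 90_000, 0.6))          # ~6, i.e. 6:1 leverage
print(leverage(10_000_000, 20_000_000, 1.0))  # ~3, i.e. 3:1 leverage
print(leverage(10_000_000, 20_000_000, 0.5))  # ~1.5
```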

&lt;p&gt;For already-popular causes such as global poverty, we might expect future funding to continue increasing at the historical rate. If global poverty donations grow with GDP, we should treat this as a bond in the global poverty investment portfolio that pays the GDP growth rate in interest. Assuming $20 billion in annual effectively-directed global poverty donations (from the rough estimate given &lt;a href=&quot;#global-poverty&quot;&gt;above&lt;/a&gt;) and 1% real GDP growth, we can treat global poverty donations as a $2 trillion asset that grows at 1% real per year and has the same risk level as GDP growth. If we look only at the GiveWell money moved of ~$100 million and assume it will grow with GDP (historically it has grown faster than that), this suggests we treat future GiveWell donations as an asset worth $10 billion.&lt;/p&gt;

&lt;p&gt;Many causes that effective altruists prioritize do not see much mainstream interest, but might expect to get much more interest in the future. For example, a few years ago, AI safety had hardly any funding, but recently has been growing in popularity. For such causes, philanthropic investors may wish to substantially lever up their portfolios to create time diversification. But also consider that if future funding looks uncertain, investors may wish to take on less risk.&lt;/p&gt;

&lt;h2 id=&quot;adjusting-for-other-altruists-investing-behavior&quot;&gt;Adjusting for other altruists’ investing behavior&lt;/h2&gt;

&lt;p&gt;Suppose there exist two risky assets A and B. Almost all investors invest in A, and hardly anyone invests in B, but both provide similar risk-adjusted return. An ordinary investor should maximally diversify by splitting their assets into approximately half A, half B.&lt;/p&gt;

&lt;p&gt;Altruists care how much money other value-aligned altruists have. As discussed above, they wish to reduce their assets’ correlation with other altruists’ assets, not merely diversify within their own portfolios. Therefore, the optimal allocation would not be 50% A/50% B, but perhaps 100% B. If most people heavily overweight asset A, then an investment-minded philanthropist can move the “altruistic portfolio” closer to optimal by investing only in B.&lt;/p&gt;

&lt;p&gt;Additionally, the pool of philanthropic money has some optimal Samuelson share. If the aggregate altruistic portfolio does not reach that risk target, then thoughtful investors can compensate by over-leveraging to make up for others’ insufficient leverage. (If most altruists take on too much risk, one can compensate by holding extra cash, but this seems unlikely in practice.)&lt;/p&gt;

&lt;p&gt;These considerations matter a great deal. If your portfolio only makes up a small part of the altruistic portfolio, this suggests that you should hold no conventional assets and instead invest your entire portfolio in something weird-looking in order to maximize uncorrelated return. It also might mean you should take on a huge amount of leverage and incur a near-certain probability of bankrupting the altruistic portion of your portfolio in order to increase the overall risk and return of the altruistic money pool.&lt;/p&gt;

&lt;p&gt;Ideally, altruists would agree about optimal investment choices. Suppose Alice invests in 100% US equities while Bob goes 100% short on US equities. If they share values, they have a mutual interest in ensuring that the other invests optimally. One of them must be making a mistake, so they should come to an agreement about how they both should invest. Insofar as it is possible, rather than unilaterally attempting to shift the altruistic money pool, we should cooperate with other philanthropists and come to a consensus on how to invest.&lt;/p&gt;

&lt;p&gt;Although most altruists do not diversify well or take on enough risk, it is plausible that large philanthropists—who hold a disproportionate share of the altruistic money—invest better. Investment-minded altruists should particularly care about how large donors invest. If they already invest optimally, that means small donors have less of a reason to try to shift the overall investment pool.&lt;/p&gt;

&lt;h2 id=&quot;what-if-rra-varies-over-time&quot;&gt;What if RRA varies over time?&lt;/h2&gt;

&lt;p&gt;If a philanthropist’s utility curve does not follow the isoelastic utility function defined above, they do not have a single fixed value of \(\eta\). If \(\eta\) increases or decreases over time, we cannot simply look at the optimal amount of investment risk to use at a particular point in time and then maintain it forever. Furthermore, we do not necessarily want to vary leverage over time based on the value of \(\eta\).&lt;/p&gt;

&lt;p&gt;First, some simplifying assumptions:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;You will invest your money until time \(t\), at which point you will donate it. Let \(\eta_t\) be your RRA at this time. You will donate all your money at once, rather than donating over time according to some schedule.&lt;/li&gt;
  &lt;li&gt;You do not have enough money to meaningfully shift the utility curve, so the value of \(\eta_t\) does not change as a result of your donation.&lt;/li&gt;
  &lt;li&gt;You know the optimal time \(t\) to donate. (In actuality, the optimal value of \(t\) depends on the shape of the utility curve, so we cannot know it without first knowing \(\eta_t\).)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Under these assumptions, the marginal utility of your donation depends only on \(\eta_t\), not on the value of \(\eta\) at any other time. \(\eta_t\) determines the relative risk aversion of your donation, which means it tells you how much risk you should take on. The desired level of risk is given by the Samuelson share formula:&lt;/p&gt;

&lt;p&gt;\begin{align}
S = \displaystyle\frac{\mu - R_f}{\sigma^2 \eta_t}
\end{align}&lt;/p&gt;

&lt;p&gt;(Recall that \(\mu\) = expected return of investment, \(\sigma\) = standard deviation, and \(R_f\) = risk-free rate.)&lt;/p&gt;
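&lt;p&gt;As a sketch with purely illustrative inputs (the numbers below are assumptions for the example, not estimates from this essay):&lt;/p&gt;

```python
def samuelson_share(mu, r_f, sigma, eta):
    """Desired fraction of the portfolio in the risky asset."""
    return (mu - r_f) / (sigma**2 * eta)

# Hypothetical inputs: 5% expected return, 1% risk-free rate,
# 16% standard deviation, logarithmic utility (eta = 1):
print(samuelson_share(0.05, 0.01, 0.16, 1.0))  # ~1.56, i.e. ~1.5:1 leverage
```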

&lt;h1 id=&quot;donors-can-decrease-correlation&quot;&gt;Donors can decrease correlation&lt;/h1&gt;

&lt;p&gt;Because most donors’ investments are correlated, their utility curves fall off more rapidly than if they were independent. But it is not destined to be this way.&lt;/p&gt;

&lt;p&gt;If correlated donors experience logarithmic (or worse) marginal utility and uncorrelated (small) donors experience nearly linear marginal utility, donors should care a &lt;em&gt;lot&lt;/em&gt; about finding uncorrelated sources of income.&lt;/p&gt;

&lt;p&gt;What sorts of uncorrelated assets can we find, and what rate of return can we expect from them? In this section, I will survey a few asset classes worth considering. This is not meant to be an exhaustive list, nor will I describe in detail the pros and cons of each; this list is just meant as a starting point. For more on some of these diversifiers, see AQR (2018), &lt;a href=&quot;https://www.aqr.com/Insights/Research/Alternative-Thinking/It-Was-the-Worst-of-Times-Diversification-During-a-Century-of-Drawdowns&quot;&gt;It Was the Worst of Times: Diversification During a Century of Drawdowns&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Note that if all altruists started investing in these diversifying asset classes, they would no longer provide such substantial diversification benefits. But I don’t expect that to happen in the near future.&lt;/p&gt;

&lt;p&gt;(Individual investors should care just as much about decreasing correlation by diversifying. The difference is that ordinary investors can simply hold an optimal portfolio; but altruists care about other altruists’ money, so they may want to overweight diversifiers relative to what an individual investor would do.)&lt;/p&gt;

&lt;h2 id=&quot;bonds&quot;&gt;Bonds&lt;/h2&gt;

&lt;p&gt;Historically, bonds have had near-zero correlation to stocks (even anti-correlation at times). Most investors already hold bonds as well as stocks, however, so adding bonds to our portfolio does not do much to reduce our correlation to other donors.&lt;/p&gt;

&lt;h2 id=&quot;commodities&quot;&gt;Commodities&lt;/h2&gt;

&lt;p&gt;Fewer donors invest in commodities. Commodities have low (but usually still positive) correlations to stocks, and by carefully selecting good investment vehicles, they can provide positive (but probably not great) returns. For more, see Levine et al. (2016), &lt;a href=&quot;https://www.aqr.com/Insights/Research/Journal-Article/Commodities-for-the-Long-Run&quot;&gt;Commodities for the Long Run&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id=&quot;managed-futures-trendfollowing&quot;&gt;Managed futures (trendfollowing)&lt;/h2&gt;

&lt;p&gt;A better idea than bonds or commodities might be &lt;a href=&quot;https://en.wikipedia.org/wiki/Managed_futures_account&quot;&gt;managed futures&lt;/a&gt;. These are actively-managed long/short strategies that intend to provide uncorrelated returns. Managed futures funds usually use trendfollowing: going long on assets that have been trending upward over a certain time horizon, and going short on assets that have been trending downward. When I discuss managed futures in this essay, I am talking specifically about ones that use trendfollowing strategies.&lt;/p&gt;

&lt;p&gt;Hurst et al. found promising results in &lt;a href=&quot;https://www.aqr.com/Insights/Research/Journal-Article/Demystifying-Managed-Futures&quot;&gt;Demystifying Managed Futures (2013)&lt;/a&gt;. According to a backtest described in the paper, a diversified managed futures strategy (investing across stocks, bonds, commodities, and currencies) produced a return of 19.4% with a standard deviation of 10.8%, and an alpha over stock, bond, and commodity indexes of 17.4% (figure 2, page 49). That is, the managed futures strategy produced a 17.4% return that was not explained by the return of stocks, bonds, or commodities. This suggests that managed futures provide a strong uncorrelated source of returns not captured by most investors.&lt;/p&gt;

&lt;p&gt;I highly doubt that actual investors can realize a risk-adjusted return as high as what the Hurst paper found in its backtest, but I do expect managed futures strategies to be worth pursuing in some circumstances. For more on how managed futures strategies behave and why they might provide alpha, see &lt;a href=&quot;https://www.aqr.com/Insights/Research/Journal-Article/Demystifying-Managed-Futures&quot;&gt;the paper&lt;/a&gt;. And for further evidence, see Hurst et al. (2014), &lt;a href=&quot;https://www.trendfollowing.com/whitepaper/Century_Evidence_Trend_Following.pdf&quot;&gt;A Century of Evidence on Trend-Following Investing&lt;/a&gt;, which tests managed futures trendfollowing strategies going back to 1880.&lt;/p&gt;

&lt;p&gt;Going forward, I expect managed futures to perform worse for three primary reasons:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;The 19.4% figure does not include fund fees or transaction costs.&lt;/li&gt;
  &lt;li&gt;Managed futures strategies depend on interest rates, which are much lower now than they were over the studied period.&lt;/li&gt;
  &lt;li&gt;More investors today use managed futures strategies than did in the 80’s, 90’s, or early 00’s. (But note that managed futures strategies have gotten less popular over the past decade.)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;At the same time, Hurst et al. offer some reasons to be optimistic about future performance:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Transaction costs have been decreasing over time as markets become more liquid.&lt;/li&gt;
  &lt;li&gt;Investors have access to markets that were not previously investable, such as emerging market equities and emerging market currencies.&lt;/li&gt;
  &lt;li&gt;Even assuming muted future returns, trendfollowing still provides valuable diversification.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The paper provides some quantitative analysis on how managed futures benefit a portfolio even if they produce much worse returns than they did historically.&lt;/p&gt;

&lt;p&gt;Why do managed futures strategies provide alpha? Why have they not been arbitraged away? The answer is not known for certain, but the most reasonable hypothesis is that most investors avoid these strategies because they can underperform a benchmark for long periods of time. I cannot do this hypothesis justice, but in brief, consider this performance chart of a managed futures mutual fund (EQCHX) compared against the S&amp;amp;P 500 over the five-year period from 2014 to 2019 (the dark blue line is EQCHX and the light blue line is the S&amp;amp;P):&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/eqchx-sp500.png&quot; alt=&quot;Performance chart of EQCHX versus the S&amp;amp;P 500, 2014 to 2019&quot; /&gt;&lt;/p&gt;

&lt;p&gt;As discussed &lt;a href=&quot;#what-makes-altruistic-investing-different-from-traditional-investing&quot;&gt;above&lt;/a&gt;, most investors benchmark their returns against an index like the S&amp;amp;P 500, and would be unwilling to suffer five years of dramatic underperformance.&lt;/p&gt;

&lt;p&gt;For more on this hypothesis, see Alpha Architect (2015), &lt;a href=&quot;https://alphaarchitect.com/2015/08/17/the-sustainable-active-investing-framework-simple-but-not-easy/&quot;&gt;The Sustainable Active Investing Framework: Simple, But Not Easy&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Will managed futures continue to work? AQR (2018), &lt;a href=&quot;https://www.aqr.com/Insights/Research/White-Papers/Trend-Following-in-Focus&quot;&gt;Trend Following in Focus&lt;/a&gt;, discusses this question and concludes that they probably will. In short, managed futures strategies do not appear over-subscribed, and although they have performed poorly over the past few years, similar stretches of poor performance have occurred many times in the past.&lt;/p&gt;

&lt;p&gt;Eventually, of course, managed futures strategies will no longer provide uncorrelated returns. (Even if only a small percentage of investors ever adopts such strategies, over time those investors will become richer until they have enough money to eliminate the market inefficiency.) But until then, it appears that small donors with the ability to invest in managed futures can get nearly linear marginal utility of money by doing so.&lt;/p&gt;

&lt;h2 id=&quot;buy-and-hold-longshort-strategies&quot;&gt;Buy-and-hold long/short strategies&lt;/h2&gt;

&lt;p&gt;Altruists might consider pursuing buy-and-hold long/short strategies (as opposed to managed futures, which are actively-traded long/shorts). That is, buy some asset that has moderate but not perfect correlation to what you believe most philanthropists own and short-sell the standard philanthropists’ portfolio. Say you believe all philanthropists put all their money in the S&amp;amp;P 500&lt;sup id=&quot;fnref:4&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:4&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;10&lt;/a&gt;&lt;/sup&gt;. You could buy a global excluding-US stock market index, which has high but not perfect correlation with the S&amp;amp;P 500, and then short the S&amp;amp;P. As a result, you can get a return stream that’s uncorrelated with other altruistic investors. For example, if the S&amp;amp;P and the global ex-US market are correlated with r=0.8, you could put 500% of your money in the global ex-US market (that is, 5:1 leverage) and short 400% of the S&amp;amp;P 500, for a net 100% market exposure with zero correlation to the S&amp;amp;P. (That is not actually how the math works, but for the purposes of this example, it doesn’t matter.)&lt;/p&gt;
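&lt;p&gt;As a sanity check, here is a minimal sketch of the covariance algebra behind that example. The 0.8 correlation comes from the example itself; the equal 15% volatilities are an illustrative assumption I am adding.&lt;/p&gt;

```python
# Hedge-ratio algebra for the long/short example above (illustrative
# numbers only). A portfolio long `a` units of asset X and short `b`
# units of asset S has covariance with S:
#   Cov(a*X - b*S, S) = a*rho*sigma_x*sigma_s - b*sigma_s**2
# Setting this to zero gives b = a * rho * sigma_x / sigma_s.
import math

def zero_corr_short(a, rho, sigma_x, sigma_s):
    """Short position b that makes (a*X - b*S) uncorrelated with S."""
    return a * rho * sigma_x / sigma_s

# The essay's example: r = 0.8, equal volatilities, 5:1 long leverage.
a = 5.0
b = zero_corr_short(a, rho=0.8, sigma_x=0.15, sigma_s=0.15)
print(b)  # 4.0 -- the 500% long / 400% short position

# Residual covariance with S is zero by construction:
cov = a * 0.8 * 0.15 * 0.15 - b * 0.15**2
print(math.isclose(cov, 0.0, abs_tol=1e-12))  # True
```

&lt;p&gt;With equal volatilities the ratio reduces to b = a &amp;#215; r, which recovers the 500%/400% figures.&lt;/p&gt;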

&lt;p&gt;At first glance, this seems wasteful: you are shorting an asset that other altruists hold, so your investments cancel each other. But this is still a good idea given the assumption that the S&amp;amp;P-buyers are making a mistake. By using a long/short strategy, you can increase the combined return of all philanthropic money without increasing risk. In other words, you are moving the pool of philanthropic money closer to an optimally-diversified portfolio, and you are doing so more effectively than if you just bought the global ex-US index. Of course, it would be even better to convince other altruists to improve their portfolio holdings.&lt;/p&gt;

&lt;p&gt;I suspect (although do not have much evidence) that philanthropists under-weight quite a few asset classes, such as global stocks (especially emerging-market stocks), gold, commodities, and emerging-market bonds. A philanthropic investor might want to buy some of these assets while shorting overweighted ones. An investor might particularly want to do this if they do not believe that managed futures perform as well as I suggested in the previous section.&lt;/p&gt;

&lt;p&gt;Note that individual investors would never want to use this sort of strategy. The only reason this type of long/short might make sense is that you believe some investors’ money brings as much value to your utility function as your own money does, but those other investors are under-diversified, so you want to make up for that lack of diversification.&lt;/p&gt;

&lt;h2 id=&quot;longshort-factor-premia&quot;&gt;Long/short factor premia&lt;/h2&gt;

&lt;p&gt;&lt;a href=&quot;https://www.aqr.com/Insights/Research/Alternative-Thinking/It-Was-the-Worst-of-Times-Diversification-During-a-Century-of-Drawdowns&quot;&gt;AQR (2018)&lt;/a&gt; identified long/short factor premia as the best diversifier according to its backtests. In brief, a long/short factors strategy invests in market-beating factors (such as value and momentum, discussed &lt;a href=&quot;#improving-on-conventional-investing-wisdom&quot;&gt;above&lt;/a&gt;) by going long on “good” assets while going short on “bad” assets (according to the factors), producing a market neutral position. For a detailed analysis of this strategy, see Ilmanen et al. (2019), &lt;a href=&quot;https://www.aqr.com/Insights/Research/Working-Paper/How-Do-Factor-Premia-Vary-Over-Time-A-Century-of-Evidence&quot;&gt;How Do Factor Premia Vary Over Time? A Century of Evidence&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;AQR has a &lt;a href=&quot;https://funds.aqr.com/funds/alternatives/aqr-style-premia-alternative-fund/qspnx&quot;&gt;Style Premia Alternative Fund (QSPNX)&lt;/a&gt; that appears to attempt to follow the strategies identified in Ilmanen et al., although I have only briefly read the fund literature. However, this fund has a $1 million investment minimum, and I am not aware of any similar funds that retail investors can access. The AQR and Ilmanen papers suggest that sufficiently-wealthy altruistic investors may want to invest in a long/short factor fund as a diversifier, and QSPNX might be a reasonable way to do that.&lt;/p&gt;

&lt;h2 id=&quot;startups&quot;&gt;Startups&lt;/h2&gt;

&lt;p&gt;Another low-correlation investment opportunity, suggested by &lt;a href=&quot;https://rationalaltruist.com/2013/02/28/risk-aversion-and-investment-for-altruists/&quot;&gt;Paul Christiano&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;[I]f you start or invest in a small company, your payoff will depend on that company’s performance (which is typically quite risky but only weakly correlated with the market). […] This special case is only possible because the entrepreneur or investor is putting in their own effort, and moral hazard makes it hard to smooth out all of the risk across a larger pool (though VC funds will invest in many startups). You shouldn’t expect to find a similar situation in investments, except when you are providing insight which you trust but the rest of the market does not (thereby preventing you from insuring against your risk).&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2 id=&quot;mission-hedging&quot;&gt;Mission hedging&lt;/h2&gt;

&lt;p&gt;Some altruists have discussed the concept of &lt;a href=&quot;https://forum.effectivealtruism.org/posts/iZp7TtZdFyW8eT5dA/a-generalized-strategy-of-mission-hedging-investing-in-evil&quot;&gt;mission hedging&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;How should a foundation whose only mission is to prevent dangerous climate change invest its endowment? Surprisingly, in order to maximize expected utility, it might use ‘mission hedging’ investment principles and invest in fossil fuel stocks. When oil companies perform well (which means they are contributing more to climate change), the anti-climate change foundation will have more money. When more fossil fuels are burned, fossil fuel stocks go up, thus giving the foundation more money. When fewer fossil fuels are burnt and fossil fuels stocks go down, the foundation will have less money, but it does not need the money as much anymore.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;A philanthropist could mission hedge and obtain low correlation with other altruistic investors by buying a mission-hedge investment (such as fossil fuel stocks) while simultaneously shorting the broad market. This would not provide ideal diversification because altruists who invest in index funds will still have some exposure to the same risk factors as the long/short mission hedge strategy, but it will reduce correlation.&lt;/p&gt;

&lt;p&gt;My impression is that altruistic investors should prioritize maximizing risk-adjusted return, then focus on reducing correlation with other altruists. Mission hedging provides only tertiary value and should be avoided insofar as it impedes the first two goals. But I have not studied mission hedging, and I could be wrong.&lt;/p&gt;

&lt;h2 id=&quot;what-if-everyone-did-this&quot;&gt;What if everyone did this?&lt;/h2&gt;

&lt;p&gt;If you invest in a particular source of return that’s uncorrelated to the broad market, and many other philanthropists do, too, you have formed a pool of correlated investors, which means your utility curve behaves as if everyone in that pool is a single large donor. Naturally, you would prefer to invest in an asset that’s not correlated with any other altruist’s investments. But an uncorrelated asset with, say, $10 million in philanthropic money still provides much greater marginal utility than the S&amp;amp;P 500, in which orders of magnitude more value-aligned philanthropists already invest.&lt;/p&gt;

&lt;p&gt;I doubt that huge sums of philanthropic money will flow into unusual investments like managed futures in the near future, which means those people who do make such investments might experience nearly linear marginal utility of money. That said, most of the arguments in this essay rely on the assumption that the overwhelming majority of altruistic investors will not change their behavior. For the most part, the proposals in this essay eventually will stop being a good idea if enough value-aligned philanthropists adopt them.&lt;/p&gt;

&lt;h1 id=&quot;return-expectations&quot;&gt;Return expectations&lt;/h1&gt;

&lt;p&gt;The Samuelson share formula requires four parameters: \(R_f\), \(\mu\), \(\sigma\), and \(\eta\). We have discussed \(\eta\) in some depth, but have not addressed the values of the other three parameters. What might we expect?&lt;/p&gt;

&lt;p&gt;Investment firm Research Affiliates (RAFI) publishes an estimate of &lt;a href=&quot;https://interactive.researchaffiliates.com/asset-allocation&quot;&gt;return expectations&lt;/a&gt; based on solid methodology. RAFI updates its predictions regularly based on changing market conditions, but as of this writing, it predicts a 0.4% return after inflation with 14.4% standard deviation for US large-cap stocks over the next 10 years (with a 95% confidence interval of -3.4% to 4.1% return). But of course, investors should diversify beyond US stocks. RAFI estimates that the optimally-diversified portfolio will provide 1.6% real return with 4.0% standard deviation. Investors who hold this portfolio may want to use much more leverage than investors in US stocks, if only because it has a much lower standard deviation. Alternatively, RAFI’s optimal (un-levered) portfolio at the 12% volatility level has 4.5% expected real return; this portfolio has slightly worse return-to-risk ratio (0.375 instead of 0.4), but requires less leverage.&lt;/p&gt;

&lt;p&gt;We care about \(\mu - R_f\), not just \(\mu\). We subtract the risk-free rate \(R_f\) because (1) if we hold cash, that cash can earn the risk-free rate; and (2) borrowing money to use leverage requires paying the risk-free rate in interest (at least in theory, see &lt;a href=&quot;#cost-of-leverage&quot;&gt;Cost of leverage&lt;/a&gt;). Investors can earn \(R_f\) without taking on any risk, so in some sense it doesn’t count, and we should subtract it out. The current risk-free rate after inflation nearly equals zero, so we can assume \(R_f = 0\) without losing much precision.&lt;/p&gt;
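&lt;p&gt;To make this concrete, here is a sketch that plugs the RAFI estimates quoted above into the Samuelson share formula, taking \(R_f = 0\) as suggested. The functional form \((\mu - R_f) / (\eta \sigma^2)\) is the standard Samuelson result, and \(\eta = 2\) is an arbitrary illustrative level of risk aversion, not a recommendation.&lt;/p&gt;

```python
# Worked Samuelson-share example using the RAFI estimates quoted above.
# share = (mu - r_f) / (eta * sigma**2), with r_f = 0 as in the text.
# eta = 2 is an arbitrary illustrative risk-aversion level.

def samuelson_share(mu, sigma, eta, r_f=0.0):
    """Optimal fraction of wealth in the risky asset (can exceed 1)."""
    return (mu - r_f) / (eta * sigma**2)

eta = 2.0
us_large    = samuelson_share(mu=0.004, sigma=0.144, eta=eta)
diversified = samuelson_share(mu=0.016, sigma=0.040, eta=eta)

print(round(us_large, 2))     # 0.1 -- well under 1x; no leverage
print(round(diversified, 2))  # 5.0 -- heavy leverage looks optimal
```

&lt;p&gt;The low-volatility diversified portfolio calls for far more leverage than US large-cap stocks, which is exactly the point made above.&lt;/p&gt;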

&lt;p&gt;As discussed &lt;a href=&quot;#improving-on-conventional-investing-wisdom&quot;&gt;above&lt;/a&gt;, investors probably can outperform a fully diversified portfolio by tilting toward value and momentum. Most implementations of value and momentum (such as the &lt;a href=&quot;https://investor.vanguard.com/etf/profile/VTV&quot;&gt;Vanguard Value ETF&lt;/a&gt;) track a broad market index and only use a weak value or momentum tilt—they have low &lt;a href=&quot;https://alphaarchitect.com/2016/09/13/the-expensive-lesson-of-closet-indexing-avoid-low-active-share-and-high-expenses/&quot;&gt;active share&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;What kind of returns might we expect from a high-active-share value/momentum fund? Investment firm Alpha Architect maintains an index it calls the &lt;a href=&quot;https://alphaarchitect.com/2017/06/06/the-value-momentum-trend-philosophy/&quot;&gt;Global Value Momentum Trend Index&lt;/a&gt;, investable through the ETF &lt;a href=&quot;https://etfsite.alphaarchitect.com/vmot/&quot;&gt;VMOT&lt;/a&gt;.&lt;sup id=&quot;fnref:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;11&lt;/a&gt;&lt;/sup&gt; This index only invests in the top ~5% of stocks matching its criteria. Comparing Alpha Architect’s backtest from 1992 to 2017 against index returns from the &lt;a href=&quot;http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/data_library.html&quot;&gt;Ken French data library&lt;/a&gt;, VMOT would have returned 17.4% with a 13.6% standard deviation (before subtracting fees and inflation), versus 9.3%/15.0% for the US + Europe stock markets (which makes for a reasonable benchmark).&lt;/p&gt;

&lt;p&gt;For a longer backtest, I attempted to approximate VMOT’s methodology using the Ken French Data Library entries on &lt;a href=&quot;http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/Data_Library/det_10_port_form_pr_12_2.html&quot;&gt;momentum&lt;/a&gt; and &lt;a href=&quot;http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/Data_Library/det_port_form_ep.html&quot;&gt;value (earnings-to-price)&lt;/a&gt;, which go back to 1952&lt;sup id=&quot;fnref:29&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:29&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;12&lt;/a&gt;&lt;/sup&gt;. These data only include US stocks, and we have theoretical reasons to expect the simplified investment strategy represented by these data to (slightly) underperform VMOT&lt;sup id=&quot;fnref:30&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:30&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;13&lt;/a&gt;&lt;/sup&gt;. Over the full sample back to 1952, it returned 18.3% with a standard deviation of 16.2%—performing better than in the more recent period. We do not have data on how VMOT might have performed before 1992, but based on results from the Ken French data, it probably would have performed about as well or possibly better than it did in the 1992 to 2017 backtest.&lt;/p&gt;

&lt;p&gt;Going forward, we cannot necessarily expect the same returns from a strategy like VMOT. RAFI predicts an average 2.8% return for the global stock market index over the next 10 years. This suggests a differential of 9.3% - 2.8% = 6.5% between historical nominal return and future real return.&lt;sup id=&quot;fnref:23&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:23&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;14&lt;/a&gt;&lt;/sup&gt; If we subtract 6.5% from the historical VMOT return, and subtract an additional 2% for fees and costs (which is the figure Alpha Architect uses), we get an 8.9% expected real return for high-conviction value and momentum strategies like VMOT. This is a rough estimate, not an accurate figure. Arguably we should assume value and momentum will not perform as well in the future as they have in the past, and apply an additional discount to this 8.9%. If we discount the excess return by half, we get a 5.9% expected return for VMOT. I will not apply this discount in my calculations, but we should be aware that doing so would change our ultimate estimate of how much leverage to use.&lt;/p&gt;
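&lt;p&gt;The arithmetic in the preceding paragraph, spelled out (all inputs are the estimates quoted above; nothing is computed from market data):&lt;/p&gt;

```python
# Return arithmetic from the paragraph above, restated in code. All
# figures are the essay's own estimates.

vmot_backtest  = 0.174  # historical nominal return, 1992-2017 backtest
hist_market    = 0.093  # historical US + Europe market return (nominal)
rafi_forecast  = 0.028  # RAFI 10-year real-return estimate, global stocks
fees_and_costs = 0.020  # Alpha Architect's assumed drag

differential = hist_market - rafi_forecast  # the 6.5% differential
vmot_forward = vmot_backtest - differential - fees_and_costs
print(round(vmot_forward, 3))  # 0.089 -- the 8.9% expected real return

# The optional extra haircut: halve the excess over the market forecast.
discounted = rafi_forecast + (vmot_forward - rafi_forecast) / 2
print(round(discounted, 4))  # roughly 0.0585, i.e. the 5.9% figure
```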

&lt;p&gt;As something of a corroboration, RAFI &lt;a href=&quot;https://interactive.researchaffiliates.com/smart-beta#!factors&quot;&gt;provides estimates&lt;/a&gt; of forward-looking five-year return for various long/short factors. At the time of this writing, it makes the following predictions for its long/short value and momentum factors (net of transaction costs):&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;5.7% for US large-cap value&lt;/li&gt;
  &lt;li&gt;1.1% for US large-cap momentum&lt;/li&gt;
  &lt;li&gt;8.0% for US small-cap value&lt;/li&gt;
  &lt;li&gt;6.4% for US small-cap momentum&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;(RAFI’s projections for foreign developed market factors are similar but generally a bit higher.)&lt;/p&gt;

&lt;p&gt;A concentrated long-only portfolio on a particular factor would have approximately the same expected return as the long/short factor plus the broad market (although that’s not quite how the math works).&lt;/p&gt;

&lt;p&gt;The underlying indexes used by VMOT make some improvements on RAFI’s simple factor model (see &lt;a href=&quot;https://www.amazon.com/Quantitative-Value-Web-Site-Practitioners/dp/1118328078&quot;&gt;Quantitative Value&lt;/a&gt; and &lt;a href=&quot;https://www.amazon.com/Quantitative-Momentum-Practitioners-Momentum-Based-Selection/dp/111923719X&quot;&gt;Quantitative Momentum&lt;/a&gt; for details&lt;sup id=&quot;fnref:26&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:26&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;15&lt;/a&gt;&lt;/sup&gt;), so it might be reasonable to assume a higher expected return for VMOT. If we then subtract fees, we get something close to the original estimate I gave for VMOT (probably a bit higher&lt;sup id=&quot;fnref:27&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:27&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;16&lt;/a&gt;&lt;/sup&gt;).&lt;/p&gt;

&lt;p&gt;RAFI believes the value and momentum premia will work as well in the future as they have in the past, and some of the papers I linked &lt;a href=&quot;#improving-on-conventional-investing-wisdom&quot;&gt;above&lt;/a&gt; make similar claims. They offer good support for this claim, but in the interest of conservatism, we could justifiably subtract a couple of percentage points from expected return to account for premium degradation.&lt;/p&gt;

&lt;p&gt;Note that RAFI’s estimates use factor timing—attempting to guess how well factors will perform based on the current market environment, rather than just looking at historical behavior. This practice is not widely accepted; for example, see Asness et al. (2017), &lt;a href=&quot;https://www.aqr.com/Insights/Research/Journal-Article/Contrarian-Factor-Timing-is-Deceptively-Difficult&quot;&gt;Contrarian Factor Timing is Deceptively Difficult&lt;/a&gt;.
Also note that these numbers only give expected &lt;em&gt;mean&lt;/em&gt; return. Even if these estimates are accurate, we could still see much higher or lower returns due to market volatility.&lt;/p&gt;

&lt;h2 id=&quot;uncorrelated-returns&quot;&gt;Uncorrelated returns&lt;/h2&gt;

&lt;p&gt;As discussed &lt;a href=&quot;#donors-can-decrease-correlation&quot;&gt;above&lt;/a&gt;, altruists have a few options for seeking uncorrelated (or weakly-correlated) returns. Of the (investable) options discussed, managed futures appear best.&lt;/p&gt;

&lt;p&gt;The previously-discussed paper &lt;a href=&quot;https://www.trendfollowing.com/whitepaper/Century_Evidence_Trend_Following.pdf&quot;&gt;A Century of Evidence on Trend-Following Investing&lt;/a&gt; found that a diversified managed futures strategy returned 11.2% (nominal) with 9.7% standard deviation over the period 1880 to 2013, after adjusting for estimated fees and transaction costs. As I said above, I do not expect investors to realize returns this high in practice; I would expect a managed futures fund to underperform VMOT after fees and costs, but probably outperform the US stock market on a risk-adjusted basis over the next 10 years. I don’t know much about managed futures—I’ve read a few papers and done some analysis based on AQR’s &lt;a href=&quot;https://www.aqr.com/Insights/Datasets&quot;&gt;published data&lt;/a&gt;, and that’s my guess based on what I know.&lt;/p&gt;

&lt;p&gt;Chesapeake Capital’s managed futures fund (EQCHX), discussed previously, has &lt;a href=&quot;http://www.chesapeakecapital.com/performance/&quot;&gt;published performance data&lt;/a&gt; (net of fees) going back to 1988. From 1988 to 2020, it returned 9.8% (nominal) with a 19.2% standard deviation. If we subtract inflation and de-leverage to match the risk level of the AQR managed futures strategy, we come up with about a 4% real return with 11% standard deviation.&lt;/p&gt;

&lt;p&gt;As discussed in &lt;a href=&quot;https://www.aqr.com/Insights/Research/White-Papers/Trend-Following-in-Focus&quot;&gt;Trend Following in Focus&lt;/a&gt; (referenced previously), we have reason to expect managed futures to perform about as well going forward as they did in the past. We don’t know whether the 31-year sample for EQCHX is representative of longer-term performance, but it gives a more conservative estimate than the AQR data, so we can use it as a rough projection of future performance. Performance could look dramatically different in the future, but this estimate makes about as much sense as any.&lt;/p&gt;

&lt;h2 id=&quot;long-run-market-return&quot;&gt;Long-run market return&lt;/h2&gt;

&lt;p&gt;Market-beating strategies such as value and momentum, and low-correlation, positive-return strategies such as managed futures, almost certainly will perform worse as time goes on. At present, it appears that only a small percentage of investors has high enough tolerance for &lt;a href=&quot;https://en.wikipedia.org/wiki/Tracking_error&quot;&gt;tracking error&lt;/a&gt; to invest in these sorts of strategies. But over time, these investors will become richer and eventually eliminate the market inefficiency.&lt;/p&gt;

&lt;p&gt;The &lt;a href=&quot;https://plato.stanford.edu/entries/ramsey-economics/&quot;&gt;Ramsey equation&lt;/a&gt; gives the long-run rate of return in an efficient market:&lt;/p&gt;

&lt;p&gt;\begin{align}
r = \delta + \eta g
\end{align}&lt;/p&gt;

&lt;p&gt;where:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;\(r\) = investment rate of return&lt;/li&gt;
  &lt;li&gt;\(\delta\) = pure time preference&lt;/li&gt;
  &lt;li&gt;\(\eta\) = relative risk aversion&lt;/li&gt;
  &lt;li&gt;\(g\) = economic growth rate&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This equation holds because individuals discount future money at rate \(\delta + \eta g\), so they will require that investments return at least this much; and investments that return more than this will experience inflows until the expected return drops to \(r\).&lt;/p&gt;

&lt;p&gt;(If historical market returns roughly continue, we can probably expect equity markets to return 3-5% after inflation in the long run.)&lt;/p&gt;
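&lt;p&gt;A quick numeric illustration of the Ramsey equation; the parameter values are conventional placeholders, not estimates:&lt;/p&gt;

```python
# Ramsey equation: r = delta + eta * g. Parameter values below are
# conventional placeholders, chosen only to illustrate the formula.

def ramsey_rate(delta, eta, g):
    """Long-run equilibrium real rate of return."""
    return delta + eta * g

# 1% pure time preference, risk aversion 1.5, 2% real growth:
r = ramsey_rate(delta=0.01, eta=1.5, g=0.02)
print(round(r, 4))  # 0.04 -- consistent with the 3-5% range above
```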

&lt;p&gt;An investor maximizes long-run expected utility by, at each point in time, maximizing expected utility for the next point in time&lt;sup id=&quot;fnref:14&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:14&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;17&lt;/a&gt;&lt;/sup&gt;; and maximizes expected utility at each time step by investing at the level given by the Samuelson share formula based on the expected return \(\mu_t\) and standard deviation \(\sigma_t\) at each time \(t\). Therefore, the long-run expected return does not affect what risk level investors should adopt today.&lt;/p&gt;

&lt;h2 id=&quot;caveats&quot;&gt;Caveats&lt;/h2&gt;

&lt;p&gt;Markets do not behave the way standard theoretical models suggest they do. This section discusses some deviations from theory and how that affects leverage.&lt;/p&gt;

&lt;h3 id=&quot;behavior-of-leveraged-investments-in-practice&quot;&gt;Behavior of leveraged investments in practice&lt;/h3&gt;

&lt;p&gt;Before using leverage, investors should understand how different implementations of leverage can behave in practice. Brian Tomasik’s &lt;a href=&quot;https://reducing-suffering.org/should-altruists-leverage-investments/&quot;&gt;Should Altruists Leverage Investments?&lt;/a&gt; discusses this in detail, with extensive simulations; Colby Davis’s &lt;a href=&quot;https://rhsfinancial.com/2017/06/line-aggressive-crazy-leverage/&quot;&gt;The Line Between Aggressive and Crazy&lt;/a&gt; provides a more concise discussion of the most important points. Note that Davis’s article, although it does not say so explicitly, assumes investors have a logarithmic utility function; so some of its conclusions do not apply in the same way for other utility functions. For example, the formula he gives for optimal leverage is a special case of the Samuelson share formula with \(\eta = 1\).&lt;/p&gt;

&lt;h3 id=&quot;mean-reversion&quot;&gt;Mean reversion&lt;/h3&gt;

&lt;p&gt;If asset prices follow a random walk, increasing leverage proportionally increases expected value. Unfortunately, prices probably do not (entirely) follow a random walk.&lt;/p&gt;

&lt;p&gt;In the short run (on the timescale of days to weeks), stock prices tend to overreact and then mean revert&lt;sup id=&quot;fnref:18&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:18&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;18&lt;/a&gt;&lt;/sup&gt;. This essentially means that median outcomes happen more often, while high-variance outcomes (where the market goes up and then up again, or down and then down again) occur less often.&lt;/p&gt;

&lt;p&gt;As &lt;a href=&quot;https://reducing-suffering.org/should-altruists-leverage-investments/#Leveraged_investing&quot;&gt;explained by Brian Tomasik&lt;/a&gt;, when asset returns are independent across time, leverage proportionally increases mean return, but less-than-proportionally increases (and possibly decreases) median return. Because short-term mean reversion increases the probability of median outcomes, it makes leverage look less appealing.&lt;/p&gt;
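&lt;p&gt;Tomasik’s mean-versus-median point can be seen in a quick Monte Carlo. The return parameters here are made up, and the draws are independent, so this sketch shows the baseline volatility-drag effect that mean reversion then amplifies; it does not model mean reversion itself.&lt;/p&gt;

```python
# Monte Carlo sketch: with independent annual returns, leverage scales
# mean final wealth up sharply, but median final wealth grows less than
# proportionally and eventually falls. Parameters are illustrative.
import random
import statistics

random.seed(0)

def final_wealth(leverage, years=30, mu=0.05, sigma=0.16, n_paths=10000):
    results = []
    for _ in range(n_paths):
        wealth = 1.0
        for _ in range(years):
            r = leverage * random.gauss(mu, sigma)
            wealth *= max(1.0 + r, 0.0)  # wealth cannot go negative
        results.append(wealth)
    return results

for lev in (1.0, 2.0, 3.0):
    paths = final_wealth(lev)
    print(lev, round(statistics.mean(paths), 1),
          round(statistics.median(paths), 1))
# The mean keeps climbing with leverage, but the median peaks and then
# declines, which is why leverage looks worse for median outcomes.
```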

&lt;p&gt;Additionally, markets exhibit long-run mean reversion over the timescale of years: assets that have gone up (or down) a lot over the past three to five years tend to go down (or up) over the following year. Fortunately, we can adapt to this phenomenon by adjusting our return expectations based on asset valuations, which Research Affiliates does in its &lt;a href=&quot;https://interactive.researchaffiliates.com/asset-allocation&quot;&gt;estimates&lt;/a&gt; that I quoted previously. (Perhaps we could adapt to short-term mean reversion as well, but it would require updating return expectations on a frequent, e.g., daily, basis.)&lt;/p&gt;

&lt;p&gt;Assets exhibit medium-term momentum: price trends over the past 6-12 months tend to continue over the next 1-3 months. This works in favor of leverage by increasing the probability of high-variance outcomes while decreasing the frequency of median outcomes.&lt;/p&gt;

&lt;p&gt;In summary, there exist three well-established types of price trend: short-term mean reversion, medium-term momentum, and long-term mean reversion. The latter two can work to our advantage (or at least they don’t hurt us), while short-term mean reversion makes leverage look worse. How this affects the value of leverage overall warrants further investigation.&lt;/p&gt;

&lt;h3 id=&quot;left-skew-of-investment-returns&quot;&gt;Left skew of investment returns&lt;/h3&gt;

&lt;p&gt;So far, I have assumed that asset returns follow a log-normal distribution. Asset pricing models traditionally assume log-normal returns, so this assumption has well-established precedent. Unfortunately, it is false.&lt;/p&gt;

&lt;p&gt;In practice, asset returns skew to the left: highly negative returns occur more frequently than a log-normal model predicts. Any investor with a concave utility function (i.e., any reasonable investor) dislikes left skew: people dislike bad outcomes more strongly than they like equally-sized good outcomes. That means investors should take on less risk than log-normal models suggest.&lt;/p&gt;

&lt;h3 id=&quot;unpredictability-of-future-return&quot;&gt;Unpredictability of future return&lt;/h3&gt;

&lt;p&gt;The Samuelson share formula assumes we know the exact values of forward-looking expected return and volatility. But obviously we don’t. Getting too much leverage generally hurts more than not getting enough, so the more uncertain we are about these parameter values, the less leverage we should want to use (to reduce the chance of accidentally taking on too much risk).&lt;/p&gt;

&lt;p&gt;We could account for this by treating mean return and standard deviation as distributions rather than point estimates, and calculating utility-maximizing leverage across the distribution instead of at a single point. This raises a further concern that we don’t even know what distribution the mean and standard deviation have, but at least this gets us closer to an accurate model.&lt;/p&gt;
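&lt;p&gt;One simple version of this adjustment: if the estimate of \(\mu\) is itself noisy, the return an investor actually faces has variance \(\sigma^2\) plus the estimation variance, which mechanically shrinks the Samuelson share. Here is a sketch of that idea with placeholder numbers; it is a single-period approximation, not the full distributional treatment described above.&lt;/p&gt;

```python
# Folding parameter uncertainty into the Samuelson share. If the true
# mean return mu is only known up to a standard error mu_std, the
# effective return variance is sigma**2 + mu_std**2, which shrinks the
# optimal share. All numbers are placeholders.

def samuelson_share(mu, sigma, eta):
    return mu / (eta * sigma**2)

def share_with_uncertain_mu(mu_hat, sigma, mu_std, eta):
    """Optimal share when mu is estimated with standard error mu_std."""
    return mu_hat / (eta * (sigma**2 + mu_std**2))

eta = 2.0
point     = samuelson_share(mu=0.05, sigma=0.16, eta=eta)
uncertain = share_with_uncertain_mu(mu_hat=0.05, sigma=0.16,
                                    mu_std=0.08, eta=eta)
print(round(point, 2))      # 0.98
print(round(uncertain, 2))  # 0.78 -- uncertainty argues for less leverage
```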

&lt;p&gt;For more, see Chopra, V. K. and W. T. Ziemba (1993), “The Effect of Errors in Means, Variances, and Covariances on Optimal Portfolio Choice,” &lt;em&gt;Journal of Portfolio Management&lt;/em&gt;.&lt;/p&gt;

&lt;h3 id=&quot;cost-of-leverage&quot;&gt;Cost of leverage&lt;/h3&gt;

&lt;p&gt;In theory, borrowing money for leverage costs only the risk-free rate. In practice, leverage costs more than that under most circumstances.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Lifecycle Investing&lt;/em&gt; recommends buying deep-in-the-money call options. Such options on highly liquid securities tend to cost about the risk-free rate, but options on less-traded securities will carry much higher implicit interest rates.&lt;/p&gt;

&lt;p&gt;Investing with margin allows for more flexibility. &lt;a href=&quot;https://interactivebrokers.com/en/index.php?f=1595&quot;&gt;Interactive Brokers charges&lt;/a&gt; the risk-free rate plus between 0.3% and 1.5%, depending on the amount of margin used, with larger margin loans qualifying for lower rates. Investors can use margin to buy any investable asset. Large institutions can potentially negotiate cheaper rates.&lt;/p&gt;

&lt;p&gt;Brian Tomasik gives &lt;a href=&quot;https://reducing-suffering.org/should-altruists-leverage-investments/#Possible_leverage_methods&quot;&gt;a more thorough survey&lt;/a&gt; of types of leverage, along with their advantages and disadvantages.&lt;/p&gt;

&lt;p&gt;Higher costs make leverage look somewhat less attractive. We can roughly approximate the impact of a 1% cost over the risk-free rate by subtracting 1% from the expected return of an investment.&lt;sup id=&quot;fnref:17&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:17&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;19&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;h3 id=&quot;taxes&quot;&gt;Taxes&lt;/h3&gt;

&lt;p&gt;Investors who hold their money in taxable accounts should pay special attention to minimizing taxes. A portfolio that makes frequent trades will incur substantial performance drag due to taxes. (This applies to any investment, not just investment with leverage.)&lt;/p&gt;

&lt;p&gt;Taxes reduce expected return, but they also reduce volatility proportionally. If investors could borrow money for free, they could scale up their leverage by a factor of 1/(1 - tax rate), which would increase the expected return and standard deviation of their assets by exactly enough to cancel out the tax burden. Unfortunately, investors cannot get free leverage, so taxes will hurt their risk-adjusted return.&lt;/p&gt;
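&lt;p&gt;The cancellation argument in numbers. This is a stylized sketch: it assumes a flat tax applied symmetrically to gains and losses, and leverage at a zero risk-free rate, both of which are idealizations.&lt;/p&gt;

```python
# Stylized tax/leverage cancellation. Assumes a flat tax t applied
# symmetrically to gains and losses, and free leverage; both are
# idealizations.
import math

t = 0.25                 # illustrative tax rate
mu, sigma = 0.06, 0.15   # illustrative pre-tax return and volatility

after_tax_mu = (1 - t) * mu        # taxes shrink the mean...
after_tax_sigma = (1 - t) * sigma  # ...and the volatility equally

lev = 1 / (1 - t)  # lever up by 1/(1-t) to undo the tax
print(math.isclose(lev * after_tax_mu, mu))        # True: return restored
print(math.isclose(lev * after_tax_sigma, sigma))  # True: risk unchanged
```

&lt;p&gt;With costly leverage, the extra borrowing spread is paid on the levered-up position, which is why taxes still hurt risk-adjusted return in practice.&lt;/p&gt;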

&lt;p&gt;Ideally, altruists should invest in tax-sheltered accounts. Foundations can invest however they want, but small donors have more limited options. Donor-advised funds (DAFs) typically only allow a small set of investments. But some funds offer more flexibility: for example, Fidelity Charitable allows donors with at least $5 million to &lt;a href=&quot;https://www.fidelitycharitable.org/content/dam/fc-public/docs/programs/charitable-donorflex-program-overview.pdf&quot;&gt;manage their own investments&lt;/a&gt;. It also permits donors with at least $250,000 to &lt;a href=&quot;https://www.fidelitycharitable.org/giving-account/investment-options/charitable-investment-advisor-program.html&quot;&gt;nominate an investment advisor&lt;/a&gt; to manage their money. DAFs typically charge 0.6% per year, which substantially hurts your return if you keep money in them for a long time, but this might be preferable to paying taxes every year.&lt;/p&gt;

&lt;p&gt;Tax-sensitive investors should &lt;a href=&quot;https://www.etf.com/etf-education-center/etf-basics/why-are-etfs-so-tax-efficient&quot;&gt;prefer ETFs over mutual funds&lt;/a&gt; because they do not distribute taxable gains as often. In the long run, an ETF in a taxable account may incur lower costs than a DAF.&lt;/p&gt;

&lt;p&gt;This subject deserves much more discussion than I give it here, and much has been written about it elsewhere. Altruists should consider other ways to minimize taxes or, if possible, to invest in tax-sheltered accounts.&lt;/p&gt;

&lt;h3 id=&quot;additional-caveats&quot;&gt;Additional caveats&lt;/h3&gt;

&lt;p&gt;Edward Thorp, &lt;a href=&quot;http://www.eecs.harvard.edu/cs286r/courses/fall12/papers/Thorpe_KellyCriterion2007.pdf&quot;&gt;The Kelly Criterion in Blackjack, Sports Betting, and the Stock Market&lt;/a&gt; (p. 32), lists some additional false assumptions of this model:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;Stock prices do not change continuously; portfolios can’t be adjusted continuously; transactions are not costless; the borrowing rate is greater than the T-bill rate; the after tax return, if different, needs to be used[.]&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h1 id=&quot;how-should-this-change-philanthropists-behavior&quot;&gt;How should this change philanthropists’ behavior?&lt;/h1&gt;

&lt;h2 id=&quot;reasonable-leverage-ratios-under-the-theoretical-model&quot;&gt;Reasonable leverage ratios (under the theoretical model)&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;This section ignores all the &lt;a href=&quot;#caveats&quot;&gt;caveats&lt;/a&gt; presented above.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;So far, we have arrived at the following conclusions about altruists’ risk preferences:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Uncorrelated small donors experience nearly linear marginal utility of money, regardless of their preferred cause.&lt;/li&gt;
  &lt;li&gt;Interventions that straightforwardly benefit their recipients, including typical global poverty and farm animal welfare interventions, probably have nearly linear marginal utility; but marginal utility might diminish more rapidly if the best interventions dry up quickly.&lt;/li&gt;
  &lt;li&gt;Research-based cause areas, including existential risk, probably experience roughly logarithmic marginal utility.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Philanthropists with near-zero risk aversion theoretically maximize the utility of their investments by taking on as much leverage as they feasibly can.&lt;/p&gt;

&lt;p&gt;For philanthropists with logarithmic utility—likely including large donors and correlated small donors—the optimal Samuelson share depends on investment return expectations. This table gives Samuelson shares for various investment portfolios, given the return expectations laid out &lt;a href=&quot;#return-expectations&quot;&gt;previously&lt;/a&gt;. The fourth column \(S\) gives desired leverage according to the Samuelson share formula, and the fifth column \(S \cdot \mu\) gives expected return after applying leverage. This assumes the real risk-free rate equals 0% (which is approximately true as of this writing).&lt;sup id=&quot;fnref:28&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:28&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;20&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;Be aware that these estimates depend on assumptions about return expectations that may be false. Furthermore, the estimated optimal leverage only holds under the theoretical model described previously. Although this model is commonly used to approximate investment behavior, it does not accurately reflect real-world trading, for reasons listed in &lt;a href=&quot;#caveats&quot;&gt;Caveats&lt;/a&gt; as well as other reasons.&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Investment&lt;/th&gt;
      &lt;th&gt;\(\mu\)&lt;/th&gt;
      &lt;th&gt;\(\sigma\)&lt;/th&gt;
      &lt;th&gt;\(S\)&lt;/th&gt;
      &lt;th&gt;\(S \cdot \mu\)&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;diversified global index, low-vol&lt;/td&gt;
      &lt;td&gt;1.6%&lt;/td&gt;
      &lt;td&gt;4%&lt;/td&gt;
      &lt;td&gt;10.2&lt;/td&gt;
      &lt;td&gt;16%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;diversified global index, high-vol&lt;/td&gt;
      &lt;td&gt;4.5%&lt;/td&gt;
      &lt;td&gt;12%&lt;/td&gt;
      &lt;td&gt;3.4&lt;/td&gt;
      &lt;td&gt;15%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;value/momentum/trend (VMOT)&lt;/td&gt;
      &lt;td&gt;9%&lt;/td&gt;
      &lt;td&gt;13%&lt;/td&gt;
      &lt;td&gt;6.1&lt;/td&gt;
      &lt;td&gt;55%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;managed futures (MF)&lt;/td&gt;
      &lt;td&gt;4%&lt;/td&gt;
      &lt;td&gt;11%&lt;/td&gt;
      &lt;td&gt;3.5&lt;/td&gt;
      &lt;td&gt;14%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;VMOT + MF&lt;/td&gt;
      &lt;td&gt;7%&lt;/td&gt;
      &lt;td&gt;10%&lt;/td&gt;
      &lt;td&gt;7.8&lt;/td&gt;
      &lt;td&gt;54%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;These estimates assume logarithmic utility. If altruists have higher or lower \(\eta\), they should prefer lower or higher amounts of leverage (respectively).&lt;/p&gt;
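
&lt;p&gt;For concreteness, the table can be roughly reproduced from the Samuelson share formula \(S = (\mu - r_f) / (\eta \sigma^2)\) with \(r_f = 0\) and \(\eta = 1\). This sketch is my own; it matches the table only approximately, since the table’s exact figures may use a slightly different return convention:&lt;/p&gt;

```python
# Samuelson share: fraction of wealth (>1 means leverage) to hold in the
# risky asset, given expected excess return mu - rf, volatility sigma, and
# relative risk aversion eta. rf = 0 and eta = 1 (log utility) as above.

def samuelson_share(mu, sigma, eta=1.0, rf=0.0):
    return (mu - rf) / (eta * sigma ** 2)

# (mu, sigma) pairs from the table above:
portfolios = {
    "global index, low-vol":  (0.016, 0.04),
    "global index, high-vol": (0.045, 0.12),
    "VMOT":                   (0.09, 0.13),
    "managed futures":        (0.04, 0.11),
    "VMOT + MF":              (0.07, 0.10),
}
for name, (mu, sigma) in portfolios.items():
    s = samuelson_share(mu, sigma)
    print(f"{name}: S = {s:.1f}, S*mu = {s * mu:.0%}")
# Doubling eta halves every S, which is the "half Kelly" adjustment
# discussed in the next section.
```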

&lt;h2 id=&quot;deviations-from-the-theoretical-model&quot;&gt;Deviations from the theoretical model&lt;/h2&gt;

&lt;p&gt;On the other hand, a number of caveats reduce how much leverage altruists may want to use. These include market behaviors such as &lt;a href=&quot;#mean-reversion&quot;&gt;mean reversion&lt;/a&gt;, &lt;a href=&quot;#left-skew-of-investment-returns&quot;&gt;left skew of investment returns&lt;/a&gt;, and the &lt;a href=&quot;#unpredictability-of-future-return&quot;&gt;unpredictability of future return&lt;/a&gt;; as well as practical considerations such as &lt;a href=&quot;#cost-of-leverage&quot;&gt;the cost of leverage&lt;/a&gt; and &lt;a href=&quot;#taxes&quot;&gt;taxes&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;After accounting for these caveats, large philanthropists or correlated small donors should probably take on more leverage than most individual investors, but substantially less than the numbers given in the previous section. Some have proposed using “half Kelly” instead of the Kelly criterion,&lt;sup id=&quot;fnref:22&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:22&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;21&lt;/a&gt;&lt;/sup&gt; where for our purposes, the Kelly criterion corresponds to \(\eta = 1\) and half Kelly means \(\eta = 2\). This might roughly account for the various factors that make leverage look less appealing. Or we could look at how historically successful investors have calibrated their risk. For example, Warren Buffett appears to use something similar to the full Kelly criterion, as do many other successful investors.&lt;sup id=&quot;fnref:24&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:24&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;22&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;Doubling \(\eta\) results in halving \(S\) according to the Samuelson share formula. Using the same return estimates as before, this gives \(S = 1.7\) for the diversified global index (high-vol) and \(S = 3.9\) for VMOT + MF.&lt;/p&gt;

&lt;p&gt;Pushing in the other direction, recall that we made two arguments for increasing leverage even further:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Most altruists probably do not take on enough risk, so investment-minded philanthropists should increase leverage to compensate.&lt;/li&gt;
  &lt;li&gt;According to the principle of time diversification, philanthropists should increase leverage now if they expect their preferred cause area(s) to have greater access to funding in the future.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The first factor weighs overwhelmingly in favor of using leverage for small donors, although not necessarily for large donors (because large donors’ choices more heavily affect the overall altruistic investment pool). The significance of the second factor depends on one’s expectations about future movement growth.&lt;/p&gt;

&lt;p&gt;We can never definitively determine the correct amount of leverage to take, and any attempt to model the expected utility of an investment will have many shortcomings. But investors still need to pick a number, even if that number is 1:1 (i.e., no leverage). I do not claim that the leverage ratios offered in this section are optimal—they’re just reasonable guesses based on what I laid out in the previous sections of this essay. Taking too much leverage is worse than not taking enough, so I personally would probably not use this much leverage in practice.&lt;/p&gt;

&lt;h2 id=&quot;good-ventures--open-philanthropy-project&quot;&gt;Good Ventures / Open Philanthropy Project&lt;/h2&gt;

&lt;p&gt;According to many effective altruists’ value systems, most value-aligned money resides with Good Ventures, the foundation that funds the Open Philanthropy Project. Donors should care about how Good Ventures invests its money and about how to adapt their own investments to diversify against Good Ventures. Unfortunately, I do not know how Good Ventures invests, so I have nothing to say about this other than that it matters a lot.&lt;/p&gt;

&lt;p&gt;If Good Ventures invests a large portion of its assets in Facebook and does not hedge this investment (which it may or may not do, I don’t know), other philanthropists should pay particular attention to diversifying against Facebook.&lt;/p&gt;

&lt;h2 id=&quot;implementation-details&quot;&gt;Implementation details&lt;/h2&gt;

&lt;p&gt;I am not an investment professional and do not have sufficient expertise to make a recommendation about how to invest. For informational purposes, I will propose what I believe to be a reasonable and achievable investment plan based on the reasoning laid out in this essay, but with the caveat that a thoughtful investor could likely find a better implementation. This should be considered a best guess, not an endorsement.&lt;/p&gt;

&lt;p&gt;First, invest with Interactive Brokers because it offers the cheapest margin rates (at least in the United States&lt;sup id=&quot;fnref:21&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:21&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;23&lt;/a&gt;&lt;/sup&gt;). Create an altruistic investment account that’s separate from your personal account, because personal accounts should take much less risk.&lt;/p&gt;

&lt;p&gt;Second, invest all your altruistic funds into a managed futures fund. As discussed &lt;a href=&quot;#donors-can-decrease-correlation&quot;&gt;above&lt;/a&gt;, philanthropists should care greatly about reducing correlation with other investors; and managed futures look like a particularly promising way of doing that.&lt;/p&gt;

&lt;p&gt;I spoke to an investment advisor with knowledge of managed futures, and he suggested &lt;a href=&quot;https://funds.aqr.com/funds/aqr-managed-futures-strategy-hv-fund&quot;&gt;AQR Managed Futures Strategy HV Fund&lt;/a&gt; (QMHIX) as a good choice. This fund targets high (15%) volatility, which has the dual benefit that (1) investors with high risk tolerance can get higher return with less leverage and (2) the fund produces greater expected return relative to fees. QMHIX is not available to all investors; as an alternative,
&lt;a href=&quot;https://www.axsinvestments.com/axs-chesapeake-strategy-fund/&quot;&gt;AXS Chesapeake Strategy Fund&lt;/a&gt; (EQCHX)&lt;sup id=&quot;fnref:3:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;11&lt;/a&gt;&lt;/sup&gt; has similar methodology, as well as relatively low fees for a managed futures fund.&lt;/p&gt;

&lt;p&gt;Third, use as much margin as Interactive Brokers will allow without substantially risking a margin call. Investors using &lt;a href=&quot;https://en.wikipedia.org/wiki/Regulation_T&quot;&gt;Reg T margin&lt;/a&gt; cannot get anywhere close to as much leverage as is theoretically optimal according to the analysis above, so they should get as much as they can. At least, small donors should, to compensate for other donors’ lack of leverage; large philanthropists may want to take less than 2:1 leverage, though perhaps they should take more. I am uncertain about this.&lt;/p&gt;

&lt;h3 id=&quot;alternative-investment-strategies&quot;&gt;Alternative investment strategies&lt;/h3&gt;

&lt;p&gt;Another similar investment idea: Instead of buying a managed futures fund, buy value and momentum funds while shorting the broad market to produce net zero stock exposure, and then apply lots of leverage. This probably requires &lt;a href=&quot;https://en.wikipedia.org/wiki/Portfolio_margin&quot;&gt;portfolio margin&lt;/a&gt; rather than Reg T margin, which has certain qualification requirements, so not as many investors will be able to implement this strategy. Some examples of high-conviction value ETFs: &lt;a href=&quot;https://etfsite.alphaarchitect.com/ival/&quot;&gt;IVAL&lt;/a&gt;, &lt;a href=&quot;https://etfsite.alphaarchitect.com/qval/&quot;&gt;QVAL&lt;/a&gt;, &lt;a href=&quot;https://www.cambriafunds.com/gval&quot;&gt;GVAL&lt;/a&gt;, &lt;a href=&quot;https://www.cambriafunds.com/syld&quot;&gt;SYLD&lt;/a&gt;, &lt;a href=&quot;https://www.cambriafunds.com/fyld&quot;&gt;FYLD&lt;/a&gt;, &lt;a href=&quot;https://www.cambriafunds.com/eyld&quot;&gt;EYLD&lt;/a&gt;. And some momentum ETFs: &lt;a href=&quot;https://etfsite.alphaarchitect.com/imom/&quot;&gt;IMOM&lt;/a&gt;, &lt;a href=&quot;https://etfsite.alphaarchitect.com/qmom/&quot;&gt;QMOM&lt;/a&gt;, &lt;a href=&quot;https://www.cambriafunds.com/gmom&quot;&gt;GMOM&lt;/a&gt;.&lt;/p&gt;
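
&lt;p&gt;The hedging arithmetic looks something like the following sketch (the betas are invented for illustration; real funds’ market exposures would have to be estimated):&lt;/p&gt;

```python
# Size a short position in a broad-market fund so that the combined
# long/short portfolio has zero net market exposure. The beta values below
# are hypothetical, not measured fund betas.

def market_neutral_short(long_betas, long_weights, short_beta=1.0):
    """Weight (negative) of the market short that cancels the longs' beta."""
    aggregate_beta = sum(b * w for b, w in zip(long_betas, long_weights))
    return -aggregate_beta / short_beta

# Suppose the value and momentum funds each have beta ~0.9 at weight 0.5:
short_w = market_neutral_short([0.9, 0.9], [0.5, 0.5])
net_beta = 0.9 * 0.5 + 0.9 * 0.5 + short_w * 1.0   # ~0 by construction
# Leverage applied afterward then scales the market-neutral value/momentum
# exposure alone, rather than adding stock-market risk.
```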

&lt;p&gt;As a third option, philanthropists could invest in more liquid assets (which would have worse risk-adjusted return) and then use options to get much more leverage. I am inclined to believe that this does not make as much sense. For philanthropists with logarithmic risk aversion, this strategy may result in lower expected utility. Even for those with near-linear marginal utility, using high leverage (say, 10:1 or higher) may impose sufficiently high costs that the resulting investment portfolio will have negative expected return.&lt;/p&gt;

&lt;h3 id=&quot;investor-psychology&quot;&gt;Investor psychology&lt;/h3&gt;

&lt;p&gt;Philanthropists with theoretically high risk tolerance should consider their psychological reaction to pursuing risky and uncorrelated strategies. How will you react if:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;you invest your philanthropic money in a risky strategy and lose 90% or more of your money?&lt;/li&gt;
  &lt;li&gt;you invest in an uncorrelated strategy, and slowly lose money over the course of 5–10 years while most people you know are making money?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Whether a strategy is optimal in theory doesn’t matter if investors can’t follow through with it in practice. I personally do not invest in the manner I described in the previous section; I use something like a combination of that strategy and a more traditional investment portfolio.&lt;/p&gt;

&lt;h3 id=&quot;large-investors&quot;&gt;Large investors&lt;/h3&gt;

&lt;p&gt;Sufficiently large investors can non-trivially shift the altruistic money pool by themselves. Rather than investing exclusively in assets with low correlation to other altruists, they should build a well-diversified portfolio with a tilt toward under-weighted asset classes.&lt;/p&gt;

&lt;p&gt;Large institutions or ultra high-net-worth individuals most likely can implement a portfolio more effectively than individual investors. They might be able to:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;access some types of investments that individual investors cannot (such as &lt;a href=&quot;#longshort-factor-premia&quot;&gt;long/short factor funds&lt;/a&gt;);&lt;/li&gt;
  &lt;li&gt;hire investment firms to create customized products;&lt;/li&gt;
  &lt;li&gt;negotiate better margin rates;&lt;/li&gt;
  &lt;li&gt;use higher leverage ratios.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Wealthy philanthropists or institutions with relatively long time horizons should consider how best to use these advantages.&lt;/p&gt;

&lt;p&gt;I would suggest that investors who want additional guidance speak to &lt;a href=&quot;https://alphaarchitect.com/&quot;&gt;Alpha Architect&lt;/a&gt;. It is an investment management firm that produces high-quality research and investment products (which is why I have cited it repeatedly), and offers custom solutions for large investors. Perhaps more importantly, I believe the owners of the firm genuinely want to help people invest well—an all-too-rare trait among investment managers. The people at Alpha Architect know far more than I do about investing, and likely have better ideas about how to invest in unusual situations.&lt;/p&gt;

&lt;h1 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h1&gt;
&lt;h2 id=&quot;confidences&quot;&gt;Confidences&lt;/h2&gt;

&lt;p&gt;I will conclude by restating the main claims of this essay, with corresponding &lt;a href=&quot;/confidence_tags&quot;&gt;confidence levels&lt;/a&gt;.&lt;/p&gt;

&lt;h3 id=&quot;on-risk-and-leverage&quot;&gt;On risk and leverage&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Highly likely:&lt;/strong&gt; More altruists should use leverage.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Highly likely:&lt;/strong&gt; Altruists should consider their own psychology when deciding how to invest.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Likely:&lt;/strong&gt; Most altruists should have higher risk tolerance than typical self-interested investors.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Likely:&lt;/strong&gt; The best way to get leverage for most investors is with an Interactive Brokers margin account.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Possible:&lt;/strong&gt; Many altruists should use as much leverage as they can reasonably manage.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Possible:&lt;/strong&gt; For most altruists, \(0 &amp;lt; \eta \le 1\).&lt;/p&gt;

&lt;h3 id=&quot;on-asset-allocation&quot;&gt;On asset allocation&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Highly likely:&lt;/strong&gt; More altruists should tilt their investments toward value and momentum.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Highly likely:&lt;/strong&gt; Altruists should pay particular attention to reducing correlation with other altruists’ investments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Highly likely:&lt;/strong&gt; There exists a better investment strategy than the one I proposed in &lt;a href=&quot;#implementation-details&quot;&gt;Implementation details&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Likely:&lt;/strong&gt; Altruists should overwhelmingly favor assets with low correlation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Likely:&lt;/strong&gt; Altruists should specifically invest in managed futures as a way of reducing correlation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Possible:&lt;/strong&gt; Altruists should invest their entire (altruistic) portfolios into managed futures.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Possible:&lt;/strong&gt; Altruists should specifically invest in any of the funds that I named.&lt;/p&gt;

&lt;h2 id=&quot;questions-for-future-consideration&quot;&gt;Questions for future consideration&lt;/h2&gt;

&lt;ul&gt;
  &lt;li&gt;Can we build better models for the utility functions of various causes?&lt;/li&gt;
  &lt;li&gt;What if we use a utility function with non-constant RRA?&lt;/li&gt;
  &lt;li&gt;How do effective altruists as a whole currently allocate their investments across asset classes? Which asset classes do altruists under-weight most severely?&lt;/li&gt;
  &lt;li&gt;How exactly do deviations from the asset pricing model (including left skew and short-term mean reversion) affect desired leverage?&lt;/li&gt;
  &lt;li&gt;How should one invest if their investment correlation to other altruists lies between 0 and 1?&lt;/li&gt;
  &lt;li&gt;How should future donations be treated with respect to time diversification? How does the answer vary by cause area?&lt;/li&gt;
  &lt;li&gt;Should historical donations factor into the utility function? It seems to me that we should count historical good done as part of total utility, but the literature I have read on utility of consumption generally does not do this. Why not?&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;acknowledgements&quot;&gt;Acknowledgements&lt;/h2&gt;

&lt;p&gt;Thanks to Brian Tomasik, Jake McKinnon, Kit Harris, Linda Neavel Dickens, and Michael St. Jules for providing me with valuable feedback. Reviewers do not necessarily endorse any of the claims made in this essay.&lt;/p&gt;

&lt;h2 id=&quot;changelog&quot;&gt;Changelog&lt;/h2&gt;

&lt;p&gt;2020-05-17:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;Update managed futures return expectations to incorporate actual historical performance data from managed futures fund EQCHX.&lt;/li&gt;
  &lt;li&gt;Add a backtest going back to 1952 of a strategy similar to VMOT.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;2020-04-29:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;Add additional disclaimers.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;2020-04-18:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;Provide more details and references on managed futures.&lt;/li&gt;
  &lt;li&gt;Add section on long/short factor strategies.&lt;/li&gt;
  &lt;li&gt;Miscellaneous small improvements.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;2020-01-08:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;Add more references to papers on value and momentum that provide further evidence that they work.&lt;/li&gt;
&lt;/ul&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:15&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;For a short summary of empirical research on people’s risk preferences, see &lt;a href=&quot;https://www.aacalc.com/docs/relative_risk_aversion&quot;&gt;Estimating the Coefficient of Relative Risk Aversion for Consumption&lt;/a&gt;. &lt;a href=&quot;#fnref:15&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:15:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Domian, Racine, and Wilson (2003). &lt;a href=&quot;http://www.fsa.ulaval.ca/nfa2003/papiers/Craig%20Wilson.pdf&quot;&gt;Leveraged Stock Portfolios over Long Holding Periods: A Continuous Time Model.&lt;/a&gt; &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:12&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Merton (1969). &lt;a href=&quot;http://lifecycleinvesting.net/Resources/merton%20lifetime%20portfolio%20selection%201969.pdf&quot;&gt;Lifetime Portfolio Selection Under Uncertainty: The Continuous-Time Case.&lt;/a&gt; &lt;a href=&quot;#fnref:12&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:16&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Aside: I attempted to write a Python program to determine the marginal utility of cash transfers and at what point risk aversion starts to meaningfully increase. The program didn’t work very well because, the way I wrote it, NumPy had to compute triple-nested integrals far out on the tails of fat-tailed probability distributions. But I only spent a couple of hours on it, so I think such a program should be writable with a more careful approach. &lt;a href=&quot;#fnref:16&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:6&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I did not look deeply into this, but GiveDirectly’s staff has not expanded as rapidly as its donations have grown over the past few years, so this statement appears to be true. &lt;a href=&quot;#fnref:6&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:7&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;GiveWell gave substantially lower numbers &lt;a href=&quot;https://docs.google.com/spreadsheets/d/1XIBtD1aOKKkVKl9eFB_L5OF9zobONtmbg4bIXvKI86k/edit#gid=1034883018&quot;&gt;in 2018&lt;/a&gt;, ranging from 4.2 to 12.2. I have not investigated the reasons for the difference. &lt;a href=&quot;#fnref:7&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:8&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Derivation: Marginal utility of the \(m^{th}\) dollar is \(b\) times greater than the marginal utility of the \((m + m&apos;)^{th}\) dollar. The standard &lt;a href=&quot;https://en.wikipedia.org/wiki/Isoelastic_utility&quot;&gt;isoelastic utility function&lt;/a&gt; is defined as&lt;/p&gt;

      &lt;p&gt;\begin{align}
\displaystyle \begin{cases}{\frac {c^{1-\eta }-1}{1-\eta }} &amp;amp; \eta \neq 1
\\
\ln(c) &amp;amp; \eta =1\end{cases}
\end{align}&lt;/p&gt;

      &lt;p&gt;and its derivative is \(c^{-\eta}\). The ratio of marginal utilities between \(m\) and \(m + m&apos;\) should equal \(b\):&lt;/p&gt;

      &lt;p&gt;\begin{align}
\displaystyle\frac{m^{-\eta}}{(m + m&apos;)^{-\eta}} = b
\end{align}&lt;/p&gt;
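
      &lt;p&gt;Taking logarithms of both sides makes the solution explicit:&lt;/p&gt;

      &lt;p&gt;\begin{align}
\displaystyle \left(\frac{m + m&apos;}{m}\right)^{\eta} = b
\qquad\Longrightarrow\qquad
\eta = \frac{\ln b}{\ln(1 + m&apos;/m)}
\end{align}&lt;/p&gt;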

      &lt;p&gt;Solving this equation for \(\eta\) gives the desired result. &lt;a href=&quot;#fnref:8&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:5&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I estimated this based on a range of typical net worths and relative risk aversions. &lt;a href=&quot;#fnref:5&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:20&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;This assumes that individuals can treat their future earnings as risk-free. This might not be the case, depending on how much volatility one expects in their future career. &lt;em&gt;Lifecycle Investing&lt;/em&gt; discusses this more, as does Milevsky (2012), &lt;a href=&quot;https://www.amazon.com/Are-You-Stock-Bond-Financial/dp/0133115291&quot;&gt;Are You a Stock or a Bond?&lt;/a&gt; &lt;a href=&quot;#fnref:20&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:4&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Philanthropists are over-represented in the US, and people in every country over-weight their own country’s assets, so philanthropists as a group almost certainly over-weight the S&amp;amp;P 500, although obviously they don’t all put all their money in it.&lt;/p&gt;

      &lt;p&gt;I did a small informal Facebook poll that supports this claim. &lt;a href=&quot;#fnref:4&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:3&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Disclaimer: At the time of this writing, I have money invested in this fund. &lt;a href=&quot;#fnref:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:3:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:29&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Methodology:&lt;/p&gt;

      &lt;ol&gt;
        &lt;li&gt;Take the top decile by momentum and the top decile by E/P.&lt;/li&gt;
        &lt;li&gt;Create a portfolio weighted 50/50 between these two, rebalanced monthly.&lt;/li&gt;
        &lt;li&gt;Use a combination of the 12-month simple moving average and 12-month time series momentum to risk manage the portfolio, &lt;a href=&quot;https://alphaarchitect.com/2014/12/02/the-robust-asset-allocation-raa-index/&quot;&gt;as described by Alpha Architect&lt;/a&gt;.&lt;/li&gt;
      &lt;/ol&gt;
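
      &lt;p&gt;A minimal sketch of step 3 as I understand it (my own reading of the rule, not the actual backtest code): each month, the portfolio stays invested in proportion to how many of the two trend signals are positive.&lt;/p&gt;

```python
# Sketch of the risk-management rule: half the portfolio follows the
# 12-month simple-moving-average rule, half follows 12-month time-series
# momentum; each half is invested when its signal is positive, else in cash.

def trend_signal(prices):
    """prices: at least 13 monthly closes, most recent last.
    Returns the fraction of the portfolio that stays invested."""
    sma12 = sum(prices[-12:]) / 12
    above_sma = prices[-1] > sma12       # 12-month moving-average rule
    tsmom = prices[-1] > prices[-13]     # 12-month time-series momentum
    return 0.5 * above_sma + 0.5 * tsmom

uptrend = list(range(100, 114))          # steadily rising monthly closes
downtrend = list(range(114, 100, -1))    # steadily falling monthly closes
# trend_signal(uptrend) is 1.0 (fully invested);
# trend_signal(downtrend) is 0.0 (fully in cash).
```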
      &lt;p&gt;&lt;a href=&quot;#fnref:29&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:30&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;For comparison, the strategy I tested returned 14.4% with standard deviation 16.0% from 1992 to 2017, hypothetically underperforming VMOT by 5 percentage points on a risk-adjusted basis. VMOT makes a number of methodological improvements over the basic strategy I tested, the biggest differences being (1) diversifying internationally, (2) using a value metric that better captures company valuation, and (3) using a more robust risk management methodology. &lt;a href=&quot;#fnref:30&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:23&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Does it make sense to treat market factors as additive like this? Yes. The academic literature on finance generally treats factors as additive: see Fama and French (1992), &lt;a href=&quot;http://www.business.unr.edu/faculty/liuc/files/badm742/fama_french_1992.pdf&quot;&gt;The Cross-Section of Expected Stock Returns&lt;/a&gt;. &lt;a href=&quot;#fnref:23&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:26&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Any deviations from a well-established investing strategy should raise concerns about data mining. The two books explain why they believe their improvements are not data-mined. They also discuss and reject some alternative potential improvements because they suspect the strategies are data-mined. &lt;a href=&quot;#fnref:26&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:27&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Based on my coarse estimate that VMOT has two percentage points higher expected return than the average of RAFI’s eight US+developed large+small value+momentum factors plus market beta, and subtracting VMOT’s fee, we get a 10–11% expected real return. &lt;a href=&quot;#fnref:27&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:14&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Proof:&lt;/p&gt;

      &lt;ul&gt;
        &lt;li&gt;Characterize investing as a discrete series of time steps from 0 to \(\tau\), after which the investor stops investing.&lt;/li&gt;
        &lt;li&gt;Each time step has an investment return random variable \(r_t\) following a normal distribution parameterized by \(\mu_t\) and \(\sigma_t\).&lt;/li&gt;
        &lt;li&gt;\(u(x)\) is the isoelastic utility function, parameterized by some constant \(\eta\).&lt;/li&gt;
        &lt;li&gt;\(S_t\) gives Samuelson share at time \(t\) (&amp;gt;1 indicates leverage).&lt;/li&gt;
        &lt;li&gt;\(x_0\) gives the amount of starting capital.&lt;/li&gt;
      &lt;/ul&gt;

      &lt;p&gt;We wish to maximize expected utility with respect to \(S_t\):&lt;/p&gt;

      &lt;p&gt;\begin{align}
E[u(x_0 \exp(\displaystyle\sum_1^\tau r_t S_t))]
\end{align}&lt;/p&gt;

      &lt;p&gt;\(u(x)\) is normally defined as \(\displaystyle\frac{x^{1-\eta} - 1}{1 - \eta}\). Utility functions are equivalent up to affine transformations, so we can simplify this to \(x^{1-\eta}\) (or \(\log(x)\) when \(\eta = 1\)). For the same reason, we can omit the \(x_0\) term. Furthermore, this means the optimal \(S_t\) does not depend on past returns, only on the expected distribution of future returns.&lt;/p&gt;

      &lt;p&gt;First, consider the case with only two time steps. Call the leverage-adjusted returns in the two time steps \(x\) and \(y\).&lt;/p&gt;

      &lt;p&gt;\begin{align}
u(e^{x + y}) =
\displaystyle \begin{cases}
(e^{x + y})^{1-\eta} = e^{x (1 - \eta)} e^{y (1 - \eta)} &amp;amp; \eta \neq 1
\\
\ln(e^{x + y}) = x + y &amp;amp; \eta = 1
\end{cases}
\end{align}&lt;/p&gt;

      &lt;p&gt;Thus we have:&lt;/p&gt;

      &lt;p&gt;\begin{align}
\displaystyle\begin{cases}
u(e^{x + y}) = u(e^x) u(e^y) &amp;amp; \eta \neq 1 \\ u(e^{x + y}) = x + y &amp;amp; \eta = 1
\end{cases}
\end{align}&lt;/p&gt;

      &lt;p&gt;Using the rule determined in the \(\eta \ne 1\) case, as well as the fact that \(E[X \cdot Y] = E[X] \cdot E[Y]\) when \(X\) and \(Y\) are independent, we can modify the expected value formula as follows:&lt;/p&gt;

      &lt;p&gt;\begin{align}
E[u(\exp(\displaystyle\sum_1^\tau r_t S_t))]
= E[\displaystyle\prod_1^\tau u(e^{r_t S_t})]
= \displaystyle\prod_1^\tau E[u(e^{r_t S_t})]
\end{align}&lt;/p&gt;

      &lt;p&gt;This product is maximized with respect to \(S_t\) by maximizing each term of the product.&lt;/p&gt;

      &lt;p&gt;In the \(\eta = 1\) case, \(E[u(e^{r_1 S_1 + ... + r_\tau S_\tau})] = E[r_1 S_1 + ... + r_\tau S_\tau]\). By linearity of expected value, this equals \(E[r_1 S_1] + ... + E[r_\tau S_\tau]\). This sum is maximized with respect to \(S_t\) by maximizing each term.&lt;/p&gt;

      &lt;p&gt;Thus, for any value of \(\eta\), we maximize long-run expected utility by independently maximizing expected utility at each time step.&lt;/p&gt;

      &lt;p&gt;Thanks to Michael St. Jules for providing feedback on how to substantially simplify this proof. &lt;a href=&quot;#fnref:14&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:18&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;De Bondt and Thaler (1985). &lt;a href=&quot;https://www.jstor.org/stable/2327804&quot;&gt;Does the Stock Market Overreact?&lt;/a&gt; &lt;a href=&quot;#fnref:18&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:17&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;This underestimates the cost of leverage because on a left-skewed distribution, subtracting a constant across multiple time periods reduces the mean by more than that constant. And it overestimates the cost of leverage because fees reduce the volatility of an investment. Whether it under- or over-estimates on net depends on the specific properties of the investment. &lt;a href=&quot;#fnref:17&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:28&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Correction 2020-01-20: Originally, I calculated \(S\) as \(\frac{\mu}{\sigma^2 \eta}\), but technically this is not correct.&lt;/p&gt;

      &lt;p&gt;Let \(\mu\) and \(\sigma\) be the mean and standard deviation of the asset, and let \(\mu_S\) and \(\sigma_S\) be the parameters used in the Samuelson share formula.&lt;/p&gt;

      &lt;p&gt;The \(\mu_S\) parameter in the Samuelson share formula should be the log of the mean, and the \(\sigma_S\) parameter should be the standard deviation of the log of the return (where the log of the return is normally distributed). That is:&lt;/p&gt;

      &lt;p&gt;\begin{align}
\mu_S = \log(1 + \mu)
\end{align}
\begin{align}
\sigma_S = \sqrt{\log\left(\displaystyle\frac{\sigma^2}{(1 + \mu)^2} + 1\right)}
\end{align}&lt;/p&gt;

      &lt;p&gt;\(\mu_S\) and \(\sigma_S\) approximately equal \(\mu\) and \(\sigma\), so simply using \(\mu_S = \mu\) and \(\sigma_S = \sigma\) is not a bad approximation, but it’s not quite accurate.&lt;/p&gt;
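
      &lt;p&gt;As a sketch of the corrected calculation (the inputs \(\mu = 0.05\), \(\sigma = 0.15\), \(\eta = 2\) are illustrative, not from the post):&lt;/p&gt;

```python
import math

# Illustrative inputs (not from the post): arithmetic mean return mu,
# standard deviation sigma, and relative risk aversion eta.
mu, sigma, eta = 0.05, 0.15, 2.0

# Corrected parameters: the log of the mean return, and the standard
# deviation of log returns for a lognormally distributed gross return.
mu_s = math.log(1 + mu)
sigma_s = math.sqrt(math.log(sigma**2 / (1 + mu)**2 + 1))

# Samuelson share using the corrected parameters.
share = mu_s / (sigma_s**2 * eta)
```

      &lt;p&gt;With these inputs, &lt;code&gt;mu_s&lt;/code&gt; and &lt;code&gt;sigma_s&lt;/code&gt; land close to \(\mu\) and \(\sigma\), which is why the naive parameters are a decent (but inexact) approximation.&lt;/p&gt;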

      &lt;p&gt;Thanks to Gordon Irlam for providing this correction. &lt;a href=&quot;#fnref:28&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:22&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;MacLean, Thorp, and Ziemba (2010). &lt;a href=&quot;https://www.stat.berkeley.edu/~aldous/157/Papers/Good_Bad_Kelly.pdf&quot;&gt;Good and bad properties of the Kelly criterion.&lt;/a&gt; &lt;a href=&quot;#fnref:22&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:24&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;MacLean, Thorp, and Ziemba (2010). The Kelly Capital Growth Investment Criterion: Theory and Practice. Part VI: Evidence of the Use of Kelly Type Strategies by the Great Investors and Others. &lt;a href=&quot;#fnref:24&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:21&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Interactive Brokers operates in other countries, but I do not know if other countries have better brokerage firms. &lt;a href=&quot;#fnref:21&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>Correction on Giving Now vs. Later</title>
				<pubDate>Sat, 04 Jan 2020 00:00:00 -0800</pubDate>
				<link>http://mdickens.me/2020/01/04/correction_giving_now_vs_later/</link>
				<guid isPermaLink="true">http://mdickens.me/2020/01/04/correction_giving_now_vs_later/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;In an &lt;a href=&quot;https://philiptrammell.com/static/discounting_for_patient_philanthropists.pdf&quot;&gt;early draft of a paper&lt;/a&gt;, Philip Trammell points out two mistakes in &lt;a href=&quot;/2019/01/21/should_global_poverty_donors_give_now_or_later/&quot;&gt;my essay&lt;/a&gt; on giving now vs. later. And since writing it, I have identified a third mistake.&lt;/p&gt;

&lt;!-- more --&gt;

&lt;p&gt;The first mistake: My model assumed that utility is logarithmic with income. Empirical research suggests that utility may be sub-logarithmic with income, which my model does not allow for. Accounting for this would make giving now look relatively better. (I do not think it is &lt;em&gt;clearly&lt;/em&gt; true that utility is sub-logarithmic with income, but it could be, and it would be better to account for the possibility.)&lt;/p&gt;

&lt;p&gt;The second mistake: My model compared the present-day discount rate with the present-day investment rate, but this is not the correct comparison. To see why, consider the possibility that the discount rate currently exceeds the investment rate, but that the discount rate is dropping (because good giving opportunities are drying up), and at some future time &lt;code&gt;t&lt;/code&gt;, the investment rate will surpass and then permanently exceed the discount rate.&lt;/p&gt;

&lt;p&gt;In this scenario, you will do more good by donating all your money today than by waiting until time &lt;code&gt;t&lt;/code&gt; to donate. But if you continue investing for long enough after time &lt;code&gt;t&lt;/code&gt;, the discounted present value of your donation will eventually surpass the value of donating today. Therefore, in this scenario you should give later, no matter how high the present-day discount rate may be.&lt;/p&gt;
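
&lt;p&gt;A toy calculation can make this concrete. The rates below are illustrative assumptions, not figures from the essay:&lt;/p&gt;

```python
import math

# A made-up illustration: the philanthropic discount rate starts above
# the investment rate but decays over time, eventually falling below it.
INVEST_RATE = 0.05

def discount_rate(t):
    # Starts at 8% and decays toward a 3% floor as good giving
    # opportunities dry up.
    return 0.03 + 0.05 * math.exp(-0.1 * t)

def relative_value_of_donating_at(T):
    # Grow the money at the investment rate until year T, then discount
    # the donation back to the present. Donating today has value 1.
    growth = INVEST_RATE * T
    total_discount = sum(discount_rate(t) for t in range(T))
    return math.exp(growth - total_discount)
```

&lt;p&gt;Under these assumed rates, donating after a decade is worth less than donating today, but waiting several decades eventually beats giving now, even though the discount rate exceeds the investment rate at the start.&lt;/p&gt;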

&lt;p&gt;Trammell discusses this scenario, including why the investment rate will eventually exceed the discount rate, in &lt;a href=&quot;https://philiptrammell.com/static/discounting_for_patient_philanthropists.pdf&quot;&gt;his paper&lt;/a&gt;. The paper is a draft, but as of this writing, the relevant discussion occurs in section 5.1. He discusses the argument at a high level in &lt;a href=&quot;https://forum.effectivealtruism.org/posts/m85Ecd8QFE7muvojj/rptp-is-a-strong-reason-to-consider-giving-later&quot;&gt;RPTP Is a Strong Reason to Consider Giving Later&lt;/a&gt; on the Effective Altruism Forum (and probably explains it better than I did).&lt;/p&gt;

&lt;p&gt;The third mistake: My model ignored risk. We can only directly compare the investment rate &lt;code&gt;r&lt;/code&gt; with the growth rate &lt;code&gt;g&lt;/code&gt; using a simple inequality (namely, &lt;code&gt;r &amp;gt; g&lt;/code&gt;) if &lt;code&gt;r&lt;/code&gt; and &lt;code&gt;g&lt;/code&gt; are perfectly correlated. If &lt;code&gt;r&lt;/code&gt; and &lt;code&gt;g&lt;/code&gt; move in lockstep, our utility function over future spending can be simplified to directly compare &lt;code&gt;r&lt;/code&gt; and &lt;code&gt;g&lt;/code&gt;. But my model introduced modifications that could disrupt the correlation. It added discounts to global poverty interventions that don’t directly depend on the consumption growth rate &lt;code&gt;g&lt;/code&gt;, and it introduced factors such as valuation into the investment rate of return. These additional factors violate the assumption that the investment rate and discount rate have perfect correlation.&lt;/p&gt;

&lt;p&gt;To account for risk, we cannot simply add up all the terms on each side. Instead, we need to do the hard(er) work of calculating the discounted expected utility of giving later relative to the utility of giving now.&lt;/p&gt;

&lt;p&gt;Fixing these problems requires making substantial modifications to my model. Trammell’s paper covers the subject much more effectively than my essay did, so rather than updating my model, I will defer to his.&lt;/p&gt;

&lt;p&gt;It is worth emphasizing that Trammell’s paper is still a draft—it contains some missing sections and may substantially change before publication—but I believe his general approach works much better than the one in my essay.&lt;/p&gt;



                </description>
			</item>
		
			<item>
				<title>Are All Actions Impermissible Under Kantian Deontology?</title>
				<pubDate>Mon, 30 Dec 2019 00:00:00 -0800</pubDate>
				<link>http://mdickens.me/2019/12/30/are_all_acions_impermissible_under_kantian_deontology/</link>
				<guid isPermaLink="true">http://mdickens.me/2019/12/30/are_all_acions_impermissible_under_kantian_deontology/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;&lt;em&gt;Epistemic status: I don’t really understand Kantian deontology.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Summary:&lt;/strong&gt; According to the rules of Kantian deontology, an action must be impermissible if it has any probability of resulting in an impermissible outcome. But all actions have some probability of resulting in such an outcome. Therefore, all actions are impermissible.&lt;/p&gt;

&lt;!-- more --&gt;

&lt;p&gt;The classic footbridge dilemma, a variation on the trolley problem:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;You see a trolley running down a track that has five people working on it. If the trolley hits them, it will kill them. You are standing on a footbridge overlooking the track, and a fat man stands next to you. If you push the fat man off the footbridge, then he will get crushed by the trolley, but his death will save the lives of the other five people. Should you push him?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;A Kantian deontologist would say no. According to Kant, you must treat people as ends in themselves, not merely as means; so you should not use the man to save the other five. (For our purposes, this is equivalent to the intuitionist moral claim that killing is categorically wrong. This essay somewhat vacillates between describing Kantian and intuitionist deontology.)&lt;/p&gt;

&lt;p&gt;Now consider an alternative, what we can call the footbridge phone booth dilemma:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;You see a trolley running down a track that has five people working on it. If the trolley hits them, it will kill them. You are standing on a footbridge overlooking the track, and an opaque phone booth stands next to you. The phone booth may be out of commission, in which case it contains some concrete weighing it down; otherwise, there is a person inside the phone booth. If you push the phone booth off the footbridge and it contains either concrete or a person, it will stop the trolley, but anyone inside the phone booth will die&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;. Should you do it?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In this case, you &lt;em&gt;might&lt;/em&gt; be using a person merely as a means, but you don’t know for sure because you don’t know if anyone is inside the phone booth.&lt;/p&gt;

&lt;p&gt;A deontologist can claim one of two things about how to handle this new dilemma:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;It is always wrong to push the phone booth as long as there is some nonzero probability that it contains a person.&lt;/li&gt;
  &lt;li&gt;There is some largest probability &lt;code&gt;p&lt;/code&gt; of a person being inside the phone booth such that pushing the booth is permissible; at any higher probability, it is wrong to push the booth off the bridge.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;(Or you could claim that pushing the phone booth is always permissible, but that is generally regarded as a consequentialist position, so I will not discuss it.)&lt;/p&gt;

&lt;h2 id=&quot;position-1-all-actions-are-impermissible&quot;&gt;Position 1: All actions are impermissible&lt;/h2&gt;

&lt;p&gt;Perhaps it is always wrong to push the phone booth as long as there is some nonzero probability that it contains a person. That is, you ought not perform an action if that action has a nonzero probability of treating a person merely as a means rather than an end. Or, to use a more intuitive/less Kantian formulation, you ought not perform any action that has a nonzero probability of killing someone. But every action has nonzero probability of killing someone; therefore, all actions are impermissible.&lt;/p&gt;

&lt;p&gt;Why is it true that every action has nonzero probability of killing someone? This follows from the fact that we should assign nonzero probability to every proposition.&lt;/p&gt;

&lt;p&gt;Why should we assign nonzero probability to every proposition? Well, suppose you believe some proposition has probability 0. That means no amount of evidence or reasoning, no matter how strong, could ever convince you to change your mind. In formal terms, if &lt;code&gt;C&lt;/code&gt; is a claim with zero probability of being true, and &lt;code&gt;E&lt;/code&gt; is some evidence,&lt;/p&gt;

&lt;p&gt;\begin{align}
P(C|E) = P(C) \cdot \frac{P(E|C)}{P(E)}
\end{align}&lt;/p&gt;

&lt;p&gt;&lt;code&gt;P(C) = 0&lt;/code&gt;, which means &lt;code&gt;P(C|E)&lt;/code&gt; must be &lt;code&gt;0&lt;/code&gt; as well. For any statement, there must be &lt;em&gt;some&lt;/em&gt; evidence that would convince you to update your beliefs. As Dennis Lindley wrote in &lt;a href=&quot;https://www.amazon.com/Making-Decisions-2nd-Dennis-Lindley/dp/0471908088&quot;&gt;Making Decisions&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;[I]f a decision ­maker thinks something cannot be true and interprets this to mean it has zero probability, he will never be influenced by any data, which is surely absurd. So leave a little probability for the moon being made of green cheese; it can be as small as one in a million, but have it there since otherwise an army of astronauts returning with samples of the said cheese will leave you unmoved.&lt;/p&gt;
&lt;/blockquote&gt;
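
&lt;p&gt;The arithmetic behind the zero-prior trap can be checked mechanically; the probabilities below are arbitrary illustrations:&lt;/p&gt;

```python
def posterior(prior, likelihood_given_claim, evidence_prob):
    # Bayes' rule: P(C|E) = P(C) * P(E|C) / P(E)
    return prior * likelihood_given_claim / evidence_prob

# Even near-conclusive evidence leaves a zero prior at zero.
assert posterior(0.0, 0.999, 0.001) == 0.0

# A tiny nonzero prior, by contrast, can be pushed up dramatically
# by evidence that is far likelier under the claim than otherwise.
assert posterior(1e-6, 0.999, 1.5e-6) > 0.5
```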

&lt;p&gt;For more on this, read &lt;a href=&quot;https://www.lesswrong.com/posts/6FmqiAgS8h4EJm86s/how-to-convince-me-that-2-2-3&quot;&gt;How to Convince Me That 2 + 2 = 3&lt;/a&gt; and &lt;a href=&quot;https://www.lesswrong.com/s/FrqfoG3LJeCZs96Ym/p/ooypcn7qFzsMcy53R&quot;&gt;Infinite Certainty&lt;/a&gt; from &lt;a href=&quot;https://www.lesswrong.com/&quot;&gt;LessWrong&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Alternatively, one might object on the basis of the definition of killing, which, one might argue, requires an intention to cause a person to die. Even if all actions might cause someone to die, that’s not the same as killing, and might not necessarily be impermissible. I would respond by returning to the phone booth dilemma. You do not know whether the phone booth contains a person, but you know it will stop the trolley; your intention is only to save the five workers on the track. In Kantian terms, you do not know whether you are treating someone merely as a means. Nonetheless, Position 1 asserts that pushing the phone booth is wrong. If that can be wrong even though you have no intention to cause someone to die, then it is similarly wrong to take an action that entails causing someone to die.&lt;/p&gt;

&lt;p&gt;One could go a step further and claim that causing someone to die is only &lt;em&gt;categorically&lt;/em&gt; wrong if the desired outcome of your action will come about only if the person dies—basically falling back to the Kantian notion that it is wrong to treat people merely as means. But it is still true that, for any action you take with a desired outcome, there is some nonzero probability that the outcome will come about only if someone dies. The phone booth dilemma demonstrates there could exist a situation in which you probabilistically cause someone’s death, which is categorically wrong according to Position 1; if the situation can exist, then any action could result in this situation with nonzero (if tiny) probability, and therefore all actions are morally wrong.&lt;/p&gt;

&lt;h2 id=&quot;position-2-deontology-reduces-to-consequentialism&quot;&gt;Position 2: Deontology reduces to consequentialism&lt;/h2&gt;

&lt;p&gt;Suppose the phone booth has some probability of containing a person. Let &lt;code&gt;p&lt;/code&gt; be the largest such probability at which it is permissible to push the phone booth off the bridge.&lt;/p&gt;

&lt;p&gt;Let’s talk about my favorite theorem: the &lt;a href=&quot;https://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Morgenstern_utility_theorem&quot;&gt;Von Neumann-Morgenstern utility theorem&lt;/a&gt;, or VNM for short. VNM states that if we accept four axioms (Completeness, Transitivity, Continuity, and Independence), then there exists a utility function such that we best satisfy our values by maximizing the expected value of that function. Three of these axioms are uncontroversial&lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt;, but deontology traditionally entails rejecting the axiom of continuity, which states that for three outcomes &lt;code&gt;L&lt;/code&gt;, &lt;code&gt;M&lt;/code&gt;, and &lt;code&gt;N&lt;/code&gt;:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;If \(L \le M \le N\), then there exists a probability \(p\) such that \(pL + (1 - p)N \sim M\)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;(where the &lt;code&gt;~&lt;/code&gt; symbol indicates that you are indifferent between the two choices.)&lt;/p&gt;

&lt;p&gt;Deontologists typically would claim that it is possible for &lt;code&gt;L&lt;/code&gt; to be categorically worse than &lt;code&gt;M&lt;/code&gt;, such that it is never worth accepting any probability of &lt;code&gt;L&lt;/code&gt;, no matter how small. (This is consistent with Position 1 above.)&lt;/p&gt;

&lt;p&gt;In the case of the phone booth dilemma, assume the following:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;code&gt;L&lt;/code&gt; = pushing the phone booth when it contains a person&lt;/li&gt;
  &lt;li&gt;&lt;code&gt;N&lt;/code&gt; = pushing the phone booth when it contains concrete&lt;/li&gt;
  &lt;li&gt;&lt;code&gt;M&lt;/code&gt; = not pushing the phone booth at all&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;From this, we can see that accepting Position 2 requires accepting Continuity. If we accept the other three VNM axioms, then the theorem holds: morality entails maximizing the expected value of some utility function.&lt;/p&gt;

&lt;p&gt;Note that, despite the terminology, just because we have a utility function doesn’t mean &lt;a href=&quot;https://www.utilitarianism.net/&quot;&gt;utilitarianism&lt;/a&gt; is true. Utilitarianism necessitates a particular type of utility function where utility is defined as the aggregate well-being minus suffering of all creatures. But VNM does entail &lt;a href=&quot;https://www.utilitarianism.net/types-of-utilitarianism#consequentialism&quot;&gt;consequentialism&lt;/a&gt;, because the only thing we care about is consequences—specifically, consequences as measured by the VNM utility function.&lt;/p&gt;

&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;When we attempt to resolve the footbridge phone booth dilemma from a deontological perspective, we must take one of two positions. Position 2 reduces to consequentialism, so ultimately Position 1 is the only claim a deontologist can make.&lt;/p&gt;

&lt;p&gt;But if this position does not allow us to take any action that has a nonzero probability of causing an impermissible outcome (such as killing someone)&lt;sup id=&quot;fnref:3&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:3&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt;, and we can never be absolutely certain that an action will &lt;em&gt;not&lt;/em&gt; result in such an outcome, then all actions are impermissible.&lt;/p&gt;

&lt;p&gt;The footbridge phone booth dilemma demonstrates the impermissibility of all actions for Kantian deontology as well as intuitionist “killing is wrong” deontology. The same reasoning applies to any form of ethics that creates categorical prescriptions. This even applies to mixed forms of consequentialism and deontology. For example, if we say that you ought to save as many lives as possible, but with the restriction that you must never cause anyone to die, we still run into the problem where any action has some probability of resulting in a death. For any ethical system that asserts that an outcome is categorically wrong, all actions are impermissible, because all actions have some probability of causing such an outcome.&lt;/p&gt;

&lt;p&gt;A system that declares all actions impermissible cannot prescribe actions, and thus fails as an ethical theory.&lt;/p&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;We must stipulate that a phone booth without concrete or a person in it is not heavy enough to stop the trolley. If the phone booth by itself can stop the trolley, then we are not using the person inside the booth as a means to stop the trolley, so the act of pushing might not be wrong according to Kant. Whether it is wrong doesn’t matter for our purposes; what we care about is that in the scenario, the (potential) person in the phone booth is being treated merely as a means. &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:2&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I know people who reject &lt;a href=&quot;https://en.wikipedia.org/wiki/Independence_of_irrelevant_alternatives&quot;&gt;Independence of Irrelevant Alternatives&lt;/a&gt; (IIA), which is conceptually related to but not the same as the VNM Axiom of Independence. You can reject the former while still accepting the latter. (IIA is irrelevant to VNM because VNM assumes that you have a fixed set of choices.) I have never heard of anyone seriously rejecting any of the VNM axioms other than Continuity, although admittedly I don’t know as much as I could about the subject. &lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:3&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;The distinction between deontology and consequentialism often is described as follows: deontology concerns actions/intentions, while consequentialism cares about consequences. It may seem strange that I am talking about outcomes, but it still makes sense to speak of impermissible outcomes under deontology. For a deontological theory to make claims about the permissibility of actions, it must examine outcomes. Moral rules such as “do not kill” make absolute prescriptions on actions with respect to the outcomes that those actions produce. An action is wrong if it results in the outcome of a person dying (and when the circumstances of the action fit what we mean by “killing,” as distinct from merely causing someone to die). When I say “impermissible outcome” I mean an outcome such that, if an action would produce that outcome with probability 1, the action is impermissible. &lt;a href=&quot;#fnref:3&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>New Page: Convert Credences into a Bet</title>
				<pubDate>Sat, 07 Sep 2019 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2019/09/07/credences/</link>
				<guid isPermaLink="true">http://mdickens.me/2019/09/07/credences/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;&lt;a href=&quot;/credence-bet/&quot;&gt;https://mdickens.me/credence-bet/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In response to &lt;a href=&quot;https://www.facebook.com/groups/eahangout/permalink/2409482792471964/&quot;&gt;a Facebook post&lt;/a&gt;, I created a page that makes it easy to bet with people. If two people disagree about a claim and want to bet on it, they can use this form to calculate how much money each person should stake. Each person inputs their best estimate of the probability that the claim is true, and the form tells them how much to bet. The form ensures that the bet is fair to both participants: they both expect to win the same amount of money.&lt;/p&gt;
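
&lt;p&gt;One scheme with this fairness property looks like the sketch below. This is my reconstruction of a fair-bet formula, not necessarily the exact calculation the linked page performs, and the credences and pot size are illustrative:&lt;/p&gt;

```python
def fair_stakes(p_yes, p_no_side, pot=100.0):
    # p_yes: credence of the person betting the claim is TRUE.
    # p_no_side: credence (that the claim is true) of the person
    # betting it is FALSE.
    # Stakes are chosen so both parties have equal expected winnings.
    # This is a reconstruction, not necessarily the page's formula.
    stake_yes = pot * (p_yes + p_no_side) / 2
    stake_no = pot - stake_yes
    return stake_yes, stake_no
```

&lt;p&gt;With credences of 0.8 and 0.3 and a $100 pot, the believer risks $55 and the skeptic $45, and by each person’s own credence both expect to win $25.&lt;/p&gt;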



                </description>
			</item>
		
			<item>
				<title>What Are the Best TV Shows (According to IMDb Episode Ratings)?</title>
				<pubDate>Sat, 20 Jul 2019 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2019/07/20/best_tv_shows/</link>
				<guid isPermaLink="true">http://mdickens.me/2019/07/20/best_tv_shows/</guid>
                <description>
                  
                  
                  
                  &lt;script&gt;
var coll = document.getElementsByClassName(&quot;button&quot;);
var i;

// Toggle the visibility of the element that immediately follows each button.
for (i = 0; i &lt; coll.length; i++) {
  coll[i].addEventListener(&quot;click&quot;, function() {
    this.classList.toggle(&quot;active&quot;);
    var content = this.nextElementSibling;
    if (content.style.display === &quot;block&quot;) {
      content.style.display = &quot;none&quot;;
    } else {
      content.style.display = &quot;block&quot;;
    }
  });
}

// Expand or collapse the collapsible list for the given suffix, keeping
// the labels of its top and bottom buttons in sync.
function expandGeneric(suffix) {
  var content = document.getElementById(&quot;collapsible&quot; + suffix);
  var topButton = document.getElementById(&quot;top-button-&quot; + suffix);
  var botButton = document.getElementById(&quot;bot-button-&quot; + suffix);
  if (content.style.display === &quot;block&quot;) {
    content.style.display = &quot;none&quot;;
    topButton.innerHTML = &quot;Expand&quot;;
    botButton.innerHTML = &quot;Expand&quot;;
    botButton.style.display = &quot;none&quot;;
  } else {
    content.style.display = &quot;block&quot;;
    topButton.innerHTML = &quot;Collapse&quot;;
    botButton.innerHTML = &quot;Collapse&quot;;
    botButton.style.display = &quot;block&quot;;
  }
}
function expand1() {expandGeneric(&quot;1&quot;);}
function expand2() {expandGeneric(&quot;2&quot;);}
function expand3() {expandGeneric(&quot;3&quot;);}
&lt;/script&gt;

&lt;noscript&gt;Please enable JavaScript to properly view this post.&lt;/noscript&gt;

&lt;p&gt;Recently, I was browsing IMDb’s &lt;a href=&quot;https://www.imdb.com/chart/toptv&quot;&gt;list of top-rated TV shows&lt;/a&gt;:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/imdb-top-5.png&quot; /&gt;&lt;/p&gt;

&lt;p&gt;According to IMDb ratings, Planet Earth II is the second-best TV show of all time, with 9.5 stars out of 10. But if you look at the ratings of each individual episode, they range from 6.8 to 7.9&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/planet-earth.png&quot; /&gt;&lt;/p&gt;

&lt;p&gt;The rating of a TV show usually differs from the average rating of that show’s episodes. What does the list of top TV shows look like if we sort by average episode rating instead of show rating? Perhaps voters have different motivations when rating shows than when rating individual episodes, and it could be interesting to see how the ratings differ.&lt;/p&gt;

&lt;p&gt;So I downloaded the &lt;a href=&quot;https://www.imdb.com/interfaces/&quot;&gt;IMDb public database&lt;/a&gt; to find out&lt;sup id=&quot;fnref:4&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:4&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;

&lt;!-- more --&gt;
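
&lt;p&gt;The post doesn’t include its code, but the core computation is a join-and-group over two of the public dataset’s files: &lt;code&gt;title.episode.tsv&lt;/code&gt; maps each episode to its parent show, and &lt;code&gt;title.ratings.tsv&lt;/code&gt; holds per-title ratings. A minimal stdlib sketch, with tiny in-memory stand-ins for the real files:&lt;/p&gt;

```python
from collections import defaultdict

# Stand-ins for IMDb's title.episode.tsv (tconst, parentTconst) and
# title.ratings.tsv (tconst, averageRating); the values are illustrative.
parent_show = {"ep1": "showA", "ep2": "showA", "ep3": "showB", "ep4": "showB"}
episode_rating = {"ep1": 9.0, "ep2": 8.0, "ep3": 7.0, "ep4": 7.5}

totals = defaultdict(float)
counts = defaultdict(int)
for episode, show in parent_show.items():
    totals[show] += episode_rating[episode]
    counts[show] += 1

# Average episode rating per show, sorted best-first.
avg_rating = {show: totals[show] / counts[show] for show in totals}
ranking = sorted(avg_rating.items(), key=lambda kv: kv[1], reverse=True)
```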

&lt;p&gt;This list shows the top 250 TV shows, ordered by average episode rating: (click the button to expand the list)&lt;/p&gt;

&lt;p&gt;&lt;button class=&quot;button&quot; id=&quot;top-button-1&quot; onclick=&quot;expand1()&quot;&gt;Expand&lt;/button&gt;&lt;/p&gt;
&lt;div class=&quot;collapsible-content&quot; id=&quot;collapsible1&quot;&gt;
  &lt;table&gt;
    &lt;tr&gt;
        &lt;td&gt;1&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4541858&quot;&gt;Escuela Nocturna&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;10.0&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5879578&quot;&gt;Monsters and Money in the Morning&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;10.0&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;3&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3057772&quot;&gt;NFLVR Live!&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;10.0&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;4&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0845676&quot;&gt;Ekhaya: A Family Chronicle&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;10.0&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;5&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3532664&quot;&gt;Handlers&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;10.0&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;6&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt6204258&quot;&gt;PKassotis Art: All Access&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;10.0&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;7&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0343312&quot;&gt;Star Whores&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;10.0&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;8&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt7467448&quot;&gt;The Healer (TV Series 2017)&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;10.0&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;9&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0416377&quot;&gt;Jozi Streets&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;10.0&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;10&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt8900452&quot;&gt;The Blind Witness&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.9&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;11&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1978852&quot;&gt;Gregory Way TV&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.9&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;12&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5691046&quot;&gt;From The Bottom To The Top&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.9&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;13&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0401969&quot;&gt;Playing It Straight&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.9&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;14&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4362760&quot;&gt;Jamie&apos;s Fish Suppers&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.9&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;15&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2266587&quot;&gt;Valley Fever&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.9&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;16&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5271842&quot;&gt;De Terugkeer van&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.9&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;17&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3348478&quot;&gt;Artifact&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.9&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;18&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt6487260&quot;&gt;The Mike Nolan Show&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.9&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;19&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4517806&quot;&gt;Furusato-Time&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.9&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;20&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt9471962&quot;&gt;Halka&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.9&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;21&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4317218&quot;&gt;Wake Up Call&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;22&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2165920&quot;&gt;Journey to a Dream&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;23&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt9060306&quot;&gt;Can Kiriklari&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;24&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt6512646&quot;&gt;EverReady&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;25&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt6046050&quot;&gt;Transformers: Interstellar&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;26&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3303350&quot;&gt;Eski Hikaye&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;27&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt8315300&quot;&gt;Maximo TV Red Carpet Arrivals&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;28&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2465494&quot;&gt;The High School&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;29&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5289208&quot;&gt;Kerchi: Dakarguli Gmirebi&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;30&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt9915144&quot;&gt;Never Ever&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;31&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0069597&quot;&gt;A Picture of Katherine Mansfield&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;32&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1314131&quot;&gt;Victoria Silvstedt: My Perfect Life&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;33&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1367144&quot;&gt;ADHDtv: With Lew Marklin&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;34&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2416950&quot;&gt;Beautiful Homes&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;35&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5554840&quot;&gt;Ligeglad&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;36&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0072509&quot;&gt;Gruntovcani&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;37&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt9471404&quot;&gt;The Chosen&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;38&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4501658&quot;&gt;Nameless: The Broken Way&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;39&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3893028&quot;&gt;Comedy Blaps&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;40&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0053548&quot;&gt;Winston Churchill: The Valiant Years&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;41&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0098887&quot;&gt;Parenthood&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;42&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2579770&quot;&gt;Outside the Box&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;43&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0065313&quot;&gt;Nase malo misto&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;44&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt7550774&quot;&gt;Black and White Love&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;45&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0983411&quot;&gt;Exposed&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;46&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1792500&quot;&gt;The Unemployment of Danny London&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;47&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2070443&quot;&gt;SheZow&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;48&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4598086&quot;&gt;Czech Hunter&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.6&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;49&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2505368&quot;&gt;Generell Behindert&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.6&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;50&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4302670&quot;&gt;The Enforcers&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.6&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;51&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2410576&quot;&gt;Earth from Above&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.6&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;52&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0191730&quot;&gt;Slozna braca&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.6&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;53&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt7366338&quot;&gt;Chernobyl&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.6&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;54&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt6396252&quot;&gt;Ölene Kadar&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.6&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;55&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt6226710&quot;&gt;Detective Anna&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.6&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;56&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2575722&quot;&gt;Hakkenden: Eight Dogs of the East&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.6&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;57&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4576148&quot;&gt;Dudes Being Dudes in Wine Country&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.6&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;58&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0388189&quot;&gt;Kurtulus&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.6&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;59&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0170961&quot;&gt;Jelenko&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.6&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;60&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4960444&quot;&gt;Bull&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.6&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;61&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt7400384&quot;&gt;Project FIA&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.6&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;62&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0406370&quot;&gt;Your Reality Checked&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.6&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;63&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt7972140&quot;&gt;Playboy Abroad: Adventures with Photographer Ana Dias&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.6&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;64&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0111962&quot;&gt;Fist of Fun&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.6&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;65&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2548038&quot;&gt;Banking Bad&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.6&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;66&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt7379766&quot;&gt;The Missing Piece&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.6&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;67&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt6350902&quot;&gt;Magic of Stella&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.6&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;68&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1826071&quot;&gt;Adini Feriha Koydum&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.6&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;69&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt7375030&quot;&gt;Blatantly Christmas&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.6&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;70&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3483512&quot;&gt;Neighborhood Patrol&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.6&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;71&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0164292&quot;&gt;The Temptations&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.6&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;72&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3232262&quot;&gt;Sonic Boom&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.5&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;73&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0106141&quot;&gt;South Beach&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.5&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;74&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0140770&quot;&gt;Vietnam&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.5&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;75&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3909280&quot;&gt;Flat TV&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.5&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;76&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2393264&quot;&gt;Saturday Night Darren and Brose&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.5&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;77&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0077046&quot;&gt;Love for Lydia&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.5&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;78&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt6260466&quot;&gt;The &apos;Other&apos; Love Story&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.5&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;79&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3921820&quot;&gt;Les Synaudes&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.5&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;80&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt6940718&quot;&gt;A Frustrated Software Engineer&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.5&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;81&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2299801&quot;&gt;Chris/tina&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.5&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;82&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0118454&quot;&gt;The Ruby Wax Show&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.5&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;83&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4410688&quot;&gt;PREMature&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.5&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;84&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt8232504&quot;&gt;Morning Changes Everything&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.5&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;85&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4262910&quot;&gt;The Hidden Persuaders Days&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.5&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;86&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0831933&quot;&gt;Aaj Robibar&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.5&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;87&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2564414&quot;&gt;You Eat What You Kill&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.5&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;88&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0206492&quot;&gt;Kamiondzije&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.5&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;89&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0436999&quot;&gt;Charlie Chalk&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.5&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;90&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt7375150&quot;&gt;Simple Life&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.5&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;91&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0085043&quot;&gt;Kamiondzije 2&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.4&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;92&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt7923710&quot;&gt;My Mister&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.4&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;93&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt8081138&quot;&gt;Kalbimin Sultani&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.4&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;94&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0104541&quot;&gt;The Jacksons: An American Dream&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.4&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;95&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4702408&quot;&gt;Cold Bloods&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.4&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;96&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5268892&quot;&gt;Star Wars: The Idiotic Clone Trooper&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.4&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;97&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5359570&quot;&gt;Seddülbahir 32 Saat&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.4&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;98&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt7748314&quot;&gt;From Russia to Iran: Crossing Wild Frontier&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.4&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;99&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt7450814&quot;&gt;Koombiyo&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.4&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;100&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0209813&quot;&gt;Takin&apos; Over the Asylum&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.4&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;101&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2778780&quot;&gt;The Most Popular Girls in School&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.4&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;102&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0182627&quot;&gt;Rescue 77&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.4&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;103&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt6234134&quot;&gt;Kings of Con&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.4&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;104&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0871620&quot;&gt;Entertainment Today Television&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.4&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;105&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt7531930&quot;&gt;Yüz Yüze&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.4&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;106&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0081899&quot;&gt;The Manions of America&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.4&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;107&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1190932&quot;&gt;Arestat la domiciliu&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.4&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;108&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0417377&quot;&gt;Wild Discovery&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.4&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;109&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0903130&quot;&gt;Fear Files&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.4&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;110&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2252520&quot;&gt;Eric&apos;s Place&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.4&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;111&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0112538&quot;&gt;Blue Murder&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.4&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;112&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt6300100&quot;&gt;My Secret Romance&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.4&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;113&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3036054&quot;&gt;Indian Ocean&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.4&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;114&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0119101&quot;&gt;Fame L.A.&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.4&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;115&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1306572&quot;&gt;Matomena homata&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.4&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;116&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0066671&quot;&gt;Jackson 5ive&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.4&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;117&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt6137742&quot;&gt;The Mandela Zone&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.4&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;118&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5785314&quot;&gt;På tur med far&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.4&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;119&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt8470750&quot;&gt;Weed 4:20&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.4&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;120&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt10478048&quot;&gt;Eteros Ego - Hamenes Psyches&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.4&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;121&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5662468&quot;&gt;Will You Press the Button?&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;122&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0453433&quot;&gt;My New Best Friend&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;123&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2980528&quot;&gt;Chuno&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;124&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt7115498&quot;&gt;Dolunay&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;125&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt9114372&quot;&gt;Çarpisma&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;126&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0756474&quot;&gt;Enemies&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;127&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2626976&quot;&gt;Her Majesty&apos;s Prison: Aylesbury&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;128&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3911580&quot;&gt;BoOzy&apos; OS&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;129&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2560140&quot;&gt;Attack on Titan&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;130&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1789438&quot;&gt;Lovers in Prague&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;131&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt7817966&quot;&gt;Are You Human Too?&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;132&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3992104&quot;&gt;Make Your Mark&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;133&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4468682&quot;&gt;Women Can&apos;t Be Funny&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;134&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt9174066&quot;&gt;Net News&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;135&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2263344&quot;&gt;Horse Play&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;136&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt9070590&quot;&gt;Falcon Crest&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;137&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt9529500&quot;&gt;Kuzgun&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;138&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt7614442&quot;&gt;Behind Closed Doors&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;139&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt7379308&quot;&gt;The Essentials&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;140&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4560434&quot;&gt;Drunk Minecraft&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;141&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1598652&quot;&gt;Dead Gorgeous&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;142&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0074039&quot;&gt;Povratak otpisanih&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;143&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1096666&quot;&gt;Stories from the Stone Age&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;144&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0324036&quot;&gt;Sparkhouse&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;145&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt10073114&quot;&gt;The World Between Us&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;146&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt8836898&quot;&gt;Harmony with A. R. Rahman&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;147&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2081225&quot;&gt;Framed&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;148&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt9603114&quot;&gt;Remake&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;149&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt8537104&quot;&gt;Extinct or Alive&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;150&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0108903&quot;&gt;ReBoot&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;151&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5804986&quot;&gt;The Big Lez Show&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;152&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt6379582&quot;&gt;#SpongeyLeaks&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;153&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3203700&quot;&gt;Culture in Decline&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;154&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt6316900&quot;&gt;Confess&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;155&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2748608&quot;&gt;Max Steel&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;156&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5582392&quot;&gt;The Other Kingdom&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;157&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt7386598&quot;&gt;The Detectives: Murder on the Streets&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;158&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt7944204&quot;&gt;8 Gun&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;159&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0491739&quot;&gt;In the Nick of Time&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;160&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2186101&quot;&gt;Mrs Biggs&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;161&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt7710222&quot;&gt;Black Coffee&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;162&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0272370&quot;&gt;CBS Storybreak&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;163&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2378536&quot;&gt;The Hero&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;164&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5978570&quot;&gt;Nirvanna the Band the Show&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;165&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0274297&quot;&gt;La vie, la vie&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;166&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt8595766&quot;&gt;Yeh Meri Family&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;167&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4603632&quot;&gt;I Bought a Rainforest&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;168&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0103486&quot;&gt;Mann &amp;amp; Machine&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;169&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5260776&quot;&gt;The Cobblestone Corridor&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;170&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1602156&quot;&gt;Atlantis Rising: The Making of &apos;Atlantis Down&apos;&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;171&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2816662&quot;&gt;Alien Encounters&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;172&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1984080&quot;&gt;The F*N Show&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;173&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt6410722&quot;&gt;Red Blooded&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;174&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt7268028&quot;&gt;No Clue&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;175&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2405500&quot;&gt;How Booze Built America&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;176&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt8250698&quot;&gt;Feeline Productions&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;177&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4328764&quot;&gt;Subject to Change&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;178&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0447591&quot;&gt;I Married a Princess&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;179&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt8292460&quot;&gt;The Great Indian Dysfunctional Family&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;180&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0447763&quot;&gt;High Spirits with Shirley Ghostman&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;181&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4452630&quot;&gt;Code Black&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;182&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0081955&quot;&gt;Velo misto&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;183&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1524415&quot;&gt;Spirited&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;184&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0883869&quot;&gt;Summer in America&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;185&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5646594&quot;&gt;Hwarang&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;186&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt7909092&quot;&gt;Tom vs. Time&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;187&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt7810684&quot;&gt;Örgü&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;188&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0090549&quot;&gt;A Year in the Life&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;189&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt9541638&quot;&gt;Earth&apos;s Great Rivers&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;190&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1952113&quot;&gt;Everyone Wants Theirs&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;191&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0891282&quot;&gt;Üb immer Treu nach Möglichkeit&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;192&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt6244192&quot;&gt;Leah Remini: Scientology and the Aftermath&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;193&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt8602408&quot;&gt;Holly Hobbie&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;194&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4073180&quot;&gt;Liar Game&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;195&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0085085&quot;&gt;SCTV Channel&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;196&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0390705&quot;&gt;Flop Show&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;197&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0106956&quot;&gt;Framed&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;198&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0096694&quot;&gt;Saved by the Bell&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;199&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5302120&quot;&gt;Hayat Sarkisi&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;200&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1827045&quot;&gt;Debra&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;201&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt8254348&quot;&gt;ImMATURE&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;202&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt6220228&quot;&gt;Mogeko Castle with Voices&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;203&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt7151596&quot;&gt;Romil and Jugal&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;204&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1875471&quot;&gt;Jean Claude Van Damme: Behind Closed Doors&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;205&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4901376&quot;&gt;Thunderbirds&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;206&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1730549&quot;&gt;Lost Land of the Tiger&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;207&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1707394&quot;&gt;Working Class&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;208&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt8373190&quot;&gt;The Proposal&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;209&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt8199790&quot;&gt;Joe Pera Talks with You&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;210&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt7094780&quot;&gt;Mr. Sunshine&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;211&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt6916352&quot;&gt;Puteshestviye na teplokhode po marshrutu Moskva - Sankt-Peterburg - Moskva 2009&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;212&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt10394776&quot;&gt;Stabbed In The Back&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;213&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5896814&quot;&gt;Tanhaiyan&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;214&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0096642&quot;&gt;Maid Marian and Her Merry Men&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;215&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt7153034&quot;&gt;The Bureau of Magical Things&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;216&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0108725&quot;&gt;Christy&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;217&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4923738&quot;&gt;The Devil&apos;s Game&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;218&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1290483&quot;&gt;Avant*garde&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;219&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0852785&quot;&gt;Free to Choose&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;220&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0485747&quot;&gt;XSTV: Xtreme Sport Television&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;221&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4339192&quot;&gt;Kill Me, Heal Me&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;222&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5347232&quot;&gt;Bill the Therapist&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;223&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt7849864&quot;&gt;Grey&apos;s Anatomy: B-Team&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;224&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt8078816&quot;&gt;Something in the Rain&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;225&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0809488&quot;&gt;The Godfather Saga&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;226&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt7137906&quot;&gt;When They See Us&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;227&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1047142&quot;&gt;Andrew Marr&apos;s History of Modern Britain&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;228&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1540626&quot;&gt;Rex Is Not Your Lawyer&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;229&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt8139230&quot;&gt;BTS: Burn the Stage&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;230&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1472534&quot;&gt;My Breasts Could Kill Me&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;231&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt9475088&quot;&gt;Vader: A Star Wars Theory Fan Series&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;232&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3519424&quot;&gt;Science Channel Presents&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;233&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4497794&quot;&gt;The Last Ship Prequel: Dr. Scott&apos;s Video Journal&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;234&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0135096&quot;&gt;The Living Planet&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;235&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt10530900&quot;&gt;Gullak&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;236&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt6019444&quot;&gt;Urban Phantasies&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;237&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1909015&quot;&gt;Undercover&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;238&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0108780&quot;&gt;Breaking News&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;239&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4354996&quot;&gt;Big Giant Swords&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;240&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1736676&quot;&gt;Talk 2000&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;241&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0349019&quot;&gt;Tabu&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;242&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt10394800&quot;&gt;The Planets&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;243&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1370331&quot;&gt;Dagvaktin&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;244&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt9893278&quot;&gt;Strange Therapy&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.0&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;245&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0088916&quot;&gt;Christopher Columbus&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.0&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;246&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2689336&quot;&gt;Les réseaux de l&apos;extrême&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.0&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;247&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt7665182&quot;&gt;TVF Bachelors&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.0&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;248&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0280247&quot;&gt;Are You Dave Gorman?&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.0&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;249&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0191747&quot;&gt;Vruc vetar&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.0&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;250&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0241120&quot;&gt;Nana&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.0&lt;/td&gt;
    &lt;/tr&gt;
&lt;/table&gt;

&lt;/div&gt;
&lt;p&gt;&lt;button class=&quot;button&quot; id=&quot;bot-button-1&quot; onclick=&quot;expand1()&quot; style=&quot;display: none&quot;&gt;Expand&lt;/button&gt;&lt;/p&gt;

&lt;p&gt;We immediately run into a problem with this list: it’s dominated by little-known shows that only have about five votes per episode. If an episode has five votes, even if they’re all 10-star ratings, I wouldn’t feel confident that it really deserves 10 stars; perhaps if more people watched the show, most of them wouldn’t like it.&lt;/p&gt;

&lt;p&gt;There’s a known solution to this problem, which IMDb used to implement. Their newer ranking algorithms are more sophisticated (and secret), but the old system is good enough for our purposes.&lt;/p&gt;

&lt;p&gt;Intuitively, the more votes a show has, the more trustworthy its rating. We can formalize the intuition with this formula:&lt;/p&gt;

&lt;div align=&quot;center&quot;&gt;rating = &lt;code&gt;(v / (v + m)) * r + (m / (v + m)) * c&lt;/code&gt;&lt;/div&gt;

&lt;p&gt;where&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;code&gt;r&lt;/code&gt; is the unweighted rating of the episode;&lt;/li&gt;
  &lt;li&gt;&lt;code&gt;c&lt;/code&gt; is the average rating across all episodes;&lt;/li&gt;
  &lt;li&gt;&lt;code&gt;v&lt;/code&gt; is the number of votes;&lt;/li&gt;
  &lt;li&gt;&lt;code&gt;m&lt;/code&gt; is the vote minimum.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In effect, it’s as if we give every TV episode &lt;code&gt;m&lt;/code&gt; extra votes at the average rating. If we use &lt;code&gt;m&lt;/code&gt; = 1000, then an episode with only five votes will have a weighted score that’s close to the average across all episodes; whereas for an episode with 10,000 votes, the 1000 extra votes won’t matter much.&lt;/p&gt;
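&lt;p&gt;To make the effect concrete, here is a minimal sketch of the formula in Python (my own illustration, not IMDb’s code), using &lt;code&gt;m&lt;/code&gt; = 1000 and a 7.4 global average:&lt;/p&gt;

```python
def weighted_rating(r, v, m=1000, c=7.4):
    # Blend the raw rating r toward the global average c,
    # in proportion to how few votes v the episode has.
    return (v / (v + m)) * r + (m / (v + m)) * c

# Five 10-star votes barely budge the score off the 7.4 average:
print(round(weighted_rating(10.0, 5), 2))       # 7.41
# With 10,000 votes, the 1000 phantom average votes matter much less:
print(round(weighted_rating(9.0, 10_000), 2))   # 8.85
```

&lt;p&gt;This is equivalent to averaging in &lt;code&gt;m&lt;/code&gt; phantom votes at the global mean, which is why sparsely-rated episodes get pulled toward 7.4.&lt;/p&gt;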

&lt;p&gt;This table displays the top 250 TV shows using this weighting algorithm, where &lt;code&gt;m&lt;/code&gt; = 1000 and the average episode rating is 7.4. For comparison, I calculated the list of top TV shows using the same weighting algorithm&lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt;, but with a 25,000-vote minimum (shows have a higher vote minimum because they tend to get a lot more votes than episodes). The table displays the episode-based ranking on the left and the show-based ranking on the right. Click “Expand” to view:&lt;/p&gt;

&lt;p&gt;&lt;button class=&quot;button&quot; id=&quot;top-button-2&quot; onclick=&quot;expand2()&quot;&gt;Expand&lt;/button&gt;&lt;/p&gt;
&lt;div class=&quot;collapsible-content&quot; id=&quot;collapsible2&quot;&gt;
  &lt;table&gt;
    &lt;tr&gt;
        &lt;th&gt;&lt;/th&gt;
        &lt;th&gt;Title (episode-ranking)&lt;/th&gt;
        &lt;th&gt;Rating&lt;/th&gt;
        &lt;th&gt;Title (show-ranking)&lt;/th&gt;
        &lt;th&gt;Rating&lt;/th&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
        &lt;td&gt;1&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt7366338&quot;&gt;Chernobyl&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.6&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0903747&quot;&gt;Breaking Bad&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.5&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0944947&quot;&gt;Game of Thrones&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.8&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt7366338&quot;&gt;Chernobyl&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.4&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;3&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0903747&quot;&gt;Breaking Bad&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.8&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0944947&quot;&gt;Game of Thrones&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.4&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;4&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3322312&quot;&gt;Daredevil&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.8&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0185906&quot;&gt;Band of Brothers&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.4&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;5&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1475582&quot;&gt;Sherlock&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.8&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2861424&quot;&gt;Rick and Morty&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;6&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2560140&quot;&gt;Attack on Titan&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.8&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0306414&quot;&gt;The Wire&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;7&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4574334&quot;&gt;Stranger Things&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.7&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0795176&quot;&gt;Planet Earth&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;8&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3921820&quot;&gt;Les Synaudes&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.6&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1475582&quot;&gt;Sherlock&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.0&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;9&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2699128&quot;&gt;The Leftovers&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.6&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0141842&quot;&gt;The Sopranos&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.0&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;10&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2243973&quot;&gt;Hannibal&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.6&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0417299&quot;&gt;Avatar: The Last Airbender&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.0&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;11&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1839578&quot;&gt;Person of Interest&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.5&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5491994&quot;&gt;Planet Earth II&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.0&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;12&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt9475088&quot;&gt;Vader: A Star Wars Theory Fan Series&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.5&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2356777&quot;&gt;True Detective&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.9&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;13&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt7137906&quot;&gt;When They See Us&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.5&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2395695&quot;&gt;Cosmos&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.9&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;14&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2356777&quot;&gt;True Detective&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.5&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2802850&quot;&gt;Fargo&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.9&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;15&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2431438&quot;&gt;Sense8&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.5&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0108778&quot;&gt;Friends&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;16&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3911580&quot;&gt;BoOzy&apos; OS&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.5&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4574334&quot;&gt;Stranger Things&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;17&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5675620&quot;&gt;The Punisher&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.5&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0303461&quot;&gt;Firefly&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;18&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0185906&quot;&gt;Band of Brothers&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0877057&quot;&gt;Death Note&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;19&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0411008&quot;&gt;Lost&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2085059&quot;&gt;Black Mirror&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;20&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2707408&quot;&gt;Narcos&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1856010&quot;&gt;House of Cards&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;21&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4145054&quot;&gt;Shadowhunters&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1831164&quot;&gt;Leyla and Mecnun&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;22&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0303461&quot;&gt;Firefly&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0475784&quot;&gt;Westworld&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;23&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0773262&quot;&gt;Dexter&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0386676&quot;&gt;The Office&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;24&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0141842&quot;&gt;The Sopranos&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2707408&quot;&gt;Narcos&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;25&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4093826&quot;&gt;Twin Peaks&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0367279&quot;&gt;Arrested Development&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;26&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4508902&quot;&gt;One Punch Man&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0098904&quot;&gt;Seinfeld&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;27&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0072500&quot;&gt;Fawlty Towers&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1355642&quot;&gt;Fullmetal Alchemist: Brotherhood&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;28&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0108778&quot;&gt;Friends&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2442560&quot;&gt;Peaky Blinders&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.6&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;29&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1598754&quot;&gt;24: Live Another Day&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0773262&quot;&gt;Dexter&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.6&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;30&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt8416494&quot;&gt;Doom Patrol&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0412142&quot;&gt;House&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.6&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;31&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1124373&quot;&gt;Sons of Anarchy&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4202274&quot;&gt;The Filthy Frank Show&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.6&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;32&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4452630&quot;&gt;Code Black&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0096697&quot;&gt;The Simpsons&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.6&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;33&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0193676&quot;&gt;Freaks and Geeks&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3322312&quot;&gt;Daredevil&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.6&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;34&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0412142&quot;&gt;House&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0098936&quot;&gt;Twin Peaks&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.6&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;35&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2017109&quot;&gt;Banshee&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0121955&quot;&gt;South Park&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.6&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;36&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt8127644&quot;&gt;The Adventures of Sherlock Holmes and Dr. Watson&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0384766&quot;&gt;Rome&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.6&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;37&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2375692&quot;&gt;Black Sails&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3032476&quot;&gt;Better Call Saul&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.6&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;38&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3487382&quot;&gt;Forever&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4508902&quot;&gt;One Punch Man&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.6&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;39&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2628232&quot;&gt;Penny Dreadful&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2560140&quot;&gt;Attack on Titan&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.6&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;40&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1695360&quot;&gt;The Legend of Korra&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0193676&quot;&gt;Freaks and Geeks&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.6&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;41&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt7493974&quot;&gt;Bodyguard&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4742876&quot;&gt;TVF Pitchers&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.6&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;42&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0098936&quot;&gt;Twin Peaks&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0436992&quot;&gt;Doctor Who&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.5&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;43&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2632424&quot;&gt;The Originals&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.1&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0106179&quot;&gt;The X-Files&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.5&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;44&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4189022&quot;&gt;Ash vs Evil Dead&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.1&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0052520&quot;&gt;The Twilight Zone&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.5&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;45&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4635276&quot;&gt;Master of None&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.1&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1586680&quot;&gt;Shameless&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.5&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;46&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0417299&quot;&gt;Avatar: The Last Airbender&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.1&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0472954&quot;&gt;It&apos;s Always Sunny in Philadelphia&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.5&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;47&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2188671&quot;&gt;Bates Motel&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.1&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0213338&quot;&gt;Cowboy Bebop&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.5&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;48&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4635282&quot;&gt;The OA&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.1&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0103359&quot;&gt;Batman: The Animated Series&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.5&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;49&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3749900&quot;&gt;Gotham&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.1&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2306299&quot;&gt;Vikings&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.5&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;50&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1865718&quot;&gt;Gravity Falls&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.1&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1606375&quot;&gt;Downton Abbey&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.5&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;51&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2649356&quot;&gt;Sharp Objects&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.1&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0407362&quot;&gt;Battlestar Galactica&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.5&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;52&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2879552&quot;&gt;11.22.63&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.1&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3530232&quot;&gt;Last Week Tonight with John Oliver&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.5&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;53&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0979432&quot;&gt;Boardwalk Empire&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.1&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0112130&quot;&gt;Pride and Prejudice&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.5&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;54&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1567432&quot;&gt;Teen Wolf&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.1&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1124373&quot;&gt;Sons of Anarchy&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.5&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;55&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4230076&quot;&gt;The Defenders&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.1&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1486217&quot;&gt;Archer&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.5&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;56&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1984119&quot;&gt;Feud&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.1&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5753856&quot;&gt;Dark&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.5&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;57&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1474684&quot;&gt;Luther&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.1&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt6763664&quot;&gt;The Haunting of Hill House&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.5&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;58&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0804503&quot;&gt;Mad Men&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.1&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0248654&quot;&gt;Six Feet Under&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.5&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;59&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1870479&quot;&gt;The Newsroom&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.1&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0072500&quot;&gt;Fawlty Towers&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.5&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;60&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1439629&quot;&gt;Community&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.1&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0804503&quot;&gt;Mad Men&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.5&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;61&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1869454&quot;&gt;Good Omens&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.1&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1266020&quot;&gt;Parks and Recreation&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.5&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;62&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0098904&quot;&gt;Seinfeld&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.1&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0214341&quot;&gt;Dragon Ball Z&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;63&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0096657&quot;&gt;Mr. Bean&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.1&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1628033&quot;&gt;Top Gear&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;64&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0384766&quot;&gt;Rome&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.0&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0979432&quot;&gt;Boardwalk Empire&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;65&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0290978&quot;&gt;The Office&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.0&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5675620&quot;&gt;The Punisher&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;66&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2177461&quot;&gt;A Discovery of Witches&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.0&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt6077448&quot;&gt;Sacred Games&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;67&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0934814&quot;&gt;Chuck&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.0&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1865718&quot;&gt;Gravity Falls&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;68&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4742876&quot;&gt;TVF Pitchers&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.0&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0460681&quot;&gt;Supernatural&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;69&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2234222&quot;&gt;Orphan Black&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.0&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1632701&quot;&gt;Suits&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;70&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0092337&quot;&gt;Dekalog&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.0&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt6468322&quot;&gt;Money Heist&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;71&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1227926&quot;&gt;Dr. Horrible&apos;s Sing-Along Blog&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.0&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4158110&quot;&gt;Mr. Robot&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;72&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1119644&quot;&gt;Fringe&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.0&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0081846&quot;&gt;Cosmos&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;73&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1405406&quot;&gt;The Vampire Diaries&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.0&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0264235&quot;&gt;Curb Your Enthusiasm&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;74&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5687612&quot;&gt;Fleabag&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.0&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0348914&quot;&gt;Deadwood&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;75&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2657262&quot;&gt;Star-Crossed&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.0&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4786824&quot;&gt;The Crown&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;76&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0348914&quot;&gt;Deadwood&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.0&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0118421&quot;&gt;Oz&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;77&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0096548&quot;&gt;Blackadder Goes Forth&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.0&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0063929&quot;&gt;Monty Python&apos;s Flying Circus&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;78&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0306414&quot;&gt;The Wire&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.0&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1533395&quot;&gt;Life&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;79&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0386676&quot;&gt;The Office&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.0&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5555260&quot;&gt;This Is Us&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;80&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3148266&quot;&gt;12 Monkeys&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.0&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3920596&quot;&gt;Big Little Lies&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;81&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5511582&quot;&gt;Timeless&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.0&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1442449&quot;&gt;Spartacus&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;82&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3475734&quot;&gt;Agent Carter&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.0&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2243973&quot;&gt;Hannibal&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;83&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1489428&quot;&gt;Justified&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.0&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0149460&quot;&gt;Futurama&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;84&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0460649&quot;&gt;How I Met Your Mother&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.0&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1439629&quot;&gt;Community&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;85&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1843230&quot;&gt;Once Upon a Time&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.0&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1870479&quot;&gt;The Newsroom&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;86&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5435008&quot;&gt;Gilmore Girls: A Year in the Life&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.0&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0411008&quot;&gt;Lost&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;87&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1442449&quot;&gt;Spartacus&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.0&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0455275&quot;&gt;Prison Break&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;88&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1856010&quot;&gt;House of Cards&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.0&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0387199&quot;&gt;Entourage&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;89&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2937900&quot;&gt;The Knick&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.0&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0286486&quot;&gt;The Shield&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;90&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1836037&quot;&gt;Longmire&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.0&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0081912&quot;&gt;Only Fools and Horses....&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;91&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt6586318&quot;&gt;Patrick Melrose&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.0&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0092455&quot;&gt;Star Trek: The Next Generation&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;92&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0285331&quot;&gt;24&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.0&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1442437&quot;&gt;Modern Family&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;93&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2149175&quot;&gt;The Americans&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.0&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt7137906&quot;&gt;When They See Us&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;94&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1327801&quot;&gt;Glee&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.0&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5834204&quot;&gt;The Handmaid&apos;s Tale&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;95&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5516154&quot;&gt;Godless&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.9&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt7450814&quot;&gt;Koombiyo&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;96&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0367279&quot;&gt;Arrested Development&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.9&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5290382&quot;&gt;Mindhunter&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;97&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1606375&quot;&gt;Downton Abbey&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.9&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0121220&quot;&gt;Dragon Ball Z&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;98&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5651844&quot;&gt;Travelers&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.9&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt9561862&quot;&gt;Love, Death &amp;amp; Robots&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;99&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1266020&quot;&gt;Parks and Recreation&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.9&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0487831&quot;&gt;The IT Crowd&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;100&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2249364&quot;&gt;Broadchurch&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.9&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5189670&quot;&gt;Making a Murderer&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;101&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3488298&quot;&gt;American Crime&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.9&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1489428&quot;&gt;Justified&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;102&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3581932&quot;&gt;And Then There Were None&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.9&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1758429&quot;&gt;Spartacus: Gods of the Arena&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;103&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4145384&quot;&gt;Scream Queens&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.9&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0200276&quot;&gt;The West Wing&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;104&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2357547&quot;&gt;Jessica Jones&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.9&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0285403&quot;&gt;Scrubs&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;105&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0314979&quot;&gt;Battlestar Galactica&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.9&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5712554&quot;&gt;The Grand Tour&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;106&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3718778&quot;&gt;Over the Garden Wall&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.9&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1119644&quot;&gt;Fringe&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;107&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2085059&quot;&gt;Black Mirror&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.9&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1474684&quot;&gt;Luther&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;108&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0487831&quot;&gt;The IT Crowd&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.9&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0096548&quot;&gt;Blackadder Goes Forth&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;109&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1830617&quot;&gt;Grimm&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.9&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2575988&quot;&gt;Silicon Valley&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;110&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3489184&quot;&gt;Constantine&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.9&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt8809646&quot;&gt;College Romance&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;111&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2211065&quot;&gt;The Other Sport&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.9&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3647998&quot;&gt;Taboo&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;112&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0162065&quot;&gt;Angel&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.9&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2401256&quot;&gt;The Night Of&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;113&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1637727&quot;&gt;The Killing&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.9&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt6769208&quot;&gt;Blue Planet II&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;114&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2193041&quot;&gt;Beauty and the Beast&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.9&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0758745&quot;&gt;Friday Night Lights&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;115&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0106179&quot;&gt;The X-Files&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.9&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1520211&quot;&gt;The Walking Dead&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;116&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5368542&quot;&gt;The Exorcist&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.9&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1305826&quot;&gt;Adventure Time&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;117&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1219024&quot;&gt;Castle&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.9&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3006802&quot;&gt;Outlander&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;118&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2384811&quot;&gt;Utopia&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.9&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0285331&quot;&gt;24&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;119&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3655448&quot;&gt;The Young Pope&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.9&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1733785&quot;&gt;Bron/Broen&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;120&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2402207&quot;&gt;The Last Ship&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.9&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2467372&quot;&gt;Brooklyn Nine-Nine&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;121&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0445114&quot;&gt;Extras&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.9&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0460649&quot;&gt;How I Met Your Mother&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;122&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0118276&quot;&gt;Buffy the Vampire Slayer&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.9&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1839578&quot;&gt;Person of Interest&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;123&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0279600&quot;&gt;Smallville&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.9&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0096657&quot;&gt;Mr. Bean&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;124&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5189670&quot;&gt;Making a Murderer&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.8&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0290978&quot;&gt;The Office&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;125&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0374463&quot;&gt;The Pacific&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.8&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0314979&quot;&gt;Battlestar Galactica&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;126&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0118421&quot;&gt;Oz&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.8&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1695360&quot;&gt;The Legend of Korra&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;127&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0248654&quot;&gt;Six Feet Under&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.8&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0353049&quot;&gt;Chappelle&apos;s Show&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;128&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1837642&quot;&gt;Revenge&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.8&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5788792&quot;&gt;The Marvelous Mrs. Maisel&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;129&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3012698&quot;&gt;Olive Kitteridge&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.8&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2431438&quot;&gt;Sense8&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;130&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0111987&quot;&gt;Goosebumps&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.8&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0388629&quot;&gt;One Piece: Wan pîsu&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;131&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5580540&quot;&gt;Santa Clarita Diet&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.8&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0421357&quot;&gt;Fullmetal Alchemist&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;132&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2094262&quot;&gt;Da Vinci&apos;s Demons&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.8&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1796960&quot;&gt;Homeland&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;133&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0387199&quot;&gt;Entourage&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.8&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3398228&quot;&gt;BoJack Horseman&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;134&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4299972&quot;&gt;The Jinx: The Life and Deaths of Robert Durst&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.8&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1492966&quot;&gt;Louie&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;135&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4834206&quot;&gt;A Series of Unfortunate Events&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.8&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt7221388&quot;&gt;Cobra Kai&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;136&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1103987&quot;&gt;Leverage&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.8&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0863046&quot;&gt;Flight of the Conchords&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.2&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;137&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4422836&quot;&gt;Limitless&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.8&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2788432&quot;&gt;American Crime Story&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.2&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;138&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0088484&quot;&gt;Blackadder II&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.8&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0296310&quot;&gt;The Blue Planet&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.2&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;139&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4047038&quot;&gt;Dirk Gently&apos;s Holistic Detective Agency&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.8&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0904208&quot;&gt;Californication&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.2&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;140&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2874692&quot;&gt;When Calls the Heart&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.8&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0187664&quot;&gt;Spaced&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.2&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;141&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0092324&quot;&gt;Blackadder the Third&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.8&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0387764&quot;&gt;Peep Show&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.2&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;142&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3865236&quot;&gt;Into the Badlands&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.8&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1806234&quot;&gt;Human Planet&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.2&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;143&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0079902&quot;&gt;Sherlock Holmes and Doctor Watson: The Acquaintance&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.8&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0071075&quot;&gt;The World at War&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.2&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;144&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0103352&quot;&gt;Are You Afraid of the Dark?&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.8&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt7927936&quot;&gt;Flames&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.2&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;145&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0995832&quot;&gt;Generation Kill&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.8&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0898266&quot;&gt;The Big Bang Theory&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.2&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;146&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2543328&quot;&gt;TURN: Washington&apos;s Spies&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.8&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2098220&quot;&gt;Hunter x Hunter&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.2&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;147&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2294189&quot;&gt;The Fall&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.8&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4189022&quot;&gt;Ash vs Evil Dead&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.2&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;148&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2654580&quot;&gt;Almost Human&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.8&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt8595766&quot;&gt;Yeh Meri Family&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.2&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;149&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4517806&quot;&gt;Furusato-Time&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.8&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0995832&quot;&gt;Generation Kill&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.2&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;150&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0491738&quot;&gt;Psych&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.8&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1909015&quot;&gt;Undercover&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.2&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;151&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4643084&quot;&gt;Counterpart&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.8&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0118480&quot;&gt;Stargate SG-1&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.2&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;152&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3322314&quot;&gt;Luke Cage&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.8&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0491738&quot;&gt;Psych&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.2&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;153&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1592154&quot;&gt;Nikita&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.8&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3718778&quot;&gt;Over the Garden Wall&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.2&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;154&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2136443&quot;&gt;The Playlist&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.8&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2017109&quot;&gt;Banshee&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.2&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;155&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3766376&quot;&gt;Vice Principals&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.8&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt6873658&quot;&gt;The Mechanism&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.2&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;156&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0925266&quot;&gt;Pushing Daisies&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.8&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2149175&quot;&gt;The Americans&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.1&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;157&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2183404&quot;&gt;Rectify&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.8&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4299972&quot;&gt;The Jinx: The Life and Deaths of Robert Durst&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.1&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;158&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1196946&quot;&gt;The Mentalist&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.8&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0994314&quot;&gt;Code Geass: Lelouch of the Rebellion&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.1&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;159&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0096694&quot;&gt;Saved by the Bell&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.8&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1910272&quot;&gt;Steins;Gate&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.1&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;160&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2262456&quot;&gt;The Hollow Crown&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.8&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0237123&quot;&gt;Coupling&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.1&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;161&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt9130692&quot;&gt;Dynasties&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.8&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3230854&quot;&gt;The Expanse&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.1&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;162&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0285403&quot;&gt;Scrubs&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.8&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4288182&quot;&gt;Atlanta&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.1&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;163&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt9253866&quot;&gt;Our Planet&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.8&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1227926&quot;&gt;Dr. Horrible&apos;s Sing-Along Blog&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.1&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;164&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt8595766&quot;&gt;Yeh Meri Family&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.8&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0182576&quot;&gt;Family Guy&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.1&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;165&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1759761&quot;&gt;Veep&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.8&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1358522&quot;&gt;White Collar&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.1&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;166&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1492966&quot;&gt;Louie&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.8&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt7767422&quot;&gt;Sex Education&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.1&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;167&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5425186&quot;&gt;Horace and Pete&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.8&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0262150&quot;&gt;Black Books&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.1&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;168&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0407362&quot;&gt;Battlestar Galactica&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.8&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0088484&quot;&gt;Blackadder II&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.1&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;169&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0081846&quot;&gt;Cosmos&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.8&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5071412&quot;&gt;Ozark&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.1&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;170&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt8682948&quot;&gt;The Act&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.8&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2632424&quot;&gt;The Originals&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.1&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;171&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0187664&quot;&gt;Spaced&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.8&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0988824&quot;&gt;Naruto: Shippûden&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.1&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;172&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3877200&quot;&gt;The Missing&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.8&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0112159&quot;&gt;Neon Genesis Evangelion&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.1&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;173&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5011816&quot;&gt;Sneaky Pete&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.8&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0280249&quot;&gt;Dragon Ball&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.1&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;174&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2654620&quot;&gt;The Strain&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt9253866&quot;&gt;Our Planet&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.1&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;175&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt8694364&quot;&gt;Years and Years&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0092324&quot;&gt;Blackadder the Third&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.1&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;176&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0805663&quot;&gt;Jericho&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0275137&quot;&gt;Justice League&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.1&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;177&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0237123&quot;&gt;Coupling&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2092588&quot;&gt;Frozen Planet&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.1&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;178&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2189461&quot;&gt;Marco Polo&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0412253&quot;&gt;Veronica Mars&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.1&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;179&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0149460&quot;&gt;Futurama&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2234222&quot;&gt;Orphan Black&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.1&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;180&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1199099&quot;&gt;Merlin&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0081834&quot;&gt;Das Boot&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.1&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;181&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1135300&quot;&gt;Dollhouse&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1637727&quot;&gt;The Killing&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.1&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;182&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1699748&quot;&gt;Hell on Wheels&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4093826&quot;&gt;Twin Peaks&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.1&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;183&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2581458&quot;&gt;Looking&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0374463&quot;&gt;The Pacific&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.1&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;184&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2297757&quot;&gt;Nathan for You&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0423731&quot;&gt;Samurai Champloo&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.1&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;185&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1733785&quot;&gt;Bron/Broen&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4270492&quot;&gt;Billions&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.1&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;186&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0278238&quot;&gt;Samurai Jack&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2375692&quot;&gt;Black Sails&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.1&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;187&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4396630&quot;&gt;The Gifted&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4052886&quot;&gt;Lucifer&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.1&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;188&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0098887&quot;&gt;Parenthood&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4179452&quot;&gt;The Last Kingdom&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.1&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;189&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1220617&quot;&gt;The Inbetweeners&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0094525&quot;&gt;Poirot&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.1&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;190&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4592410&quot;&gt;The Get Down&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0088509&quot;&gt;Dragon Ball&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.1&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;191&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3909280&quot;&gt;Flat TV&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0111958&quot;&gt;Father Ted&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.1&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;192&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2618986&quot;&gt;Wayward Pines&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0092337&quot;&gt;Dekalog&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.1&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;193&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4122068&quot;&gt;Humans&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1641384&quot;&gt;Young Justice&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.1&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;194&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt8679236&quot;&gt;Who Is America?&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0319969&quot;&gt;Carnivàle&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.1&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;195&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0863046&quot;&gt;Flight of the Conchords&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0417349&quot;&gt;North &amp;amp; South&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.1&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;196&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5580146&quot;&gt;Maniac&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2249364&quot;&gt;Broadchurch&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.1&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;197&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt6040674&quot;&gt;Waco&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2937900&quot;&gt;The Knick&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.1&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;198&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt7520794&quot;&gt;Russian Doll&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0925266&quot;&gt;Pushing Daisies&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.1&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;199&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0200276&quot;&gt;The West Wing&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5114356&quot;&gt;Legion&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.1&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;200&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2070791&quot;&gt;Revolution&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1220617&quot;&gt;The Inbetweeners&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.1&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;201&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0086661&quot;&gt;The Adventures of Sherlock Holmes&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt8398600&quot;&gt;After Life&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.1&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;202&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2930604&quot;&gt;Star Wars Rebels&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0934814&quot;&gt;Chuck&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.1&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;203&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt6466208&quot;&gt;Breathe&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0118276&quot;&gt;Buffy the Vampire Slayer&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.1&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;204&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0790772&quot;&gt;Rules of Engagement&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0163507&quot;&gt;Whose Line Is It Anyway?&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.1&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;205&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt6877772&quot;&gt;American Vandal&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2249007&quot;&gt;Ray Donovan&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.1&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;206&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2543312&quot;&gt;Halt and Catch Fire&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0060028&quot;&gt;Star Trek&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.1&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;207&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0075520&quot;&gt;Jesus of Nazareth&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt9432978&quot;&gt;Kota Factory&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.1&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;208&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2452242&quot;&gt;Happy!&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2699128&quot;&gt;The Leftovers&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.1&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;209&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt6958022&quot;&gt;The Defiant Ones&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0068098&quot;&gt;M*A*S*H&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.1&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;210&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt6866266&quot;&gt;Escape at Dannemora&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1344204&quot;&gt;Blue Mountain State&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.0&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;211&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0830361&quot;&gt;The Lost Room&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3205802&quot;&gt;How to Get Away with Murder&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.0&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;212&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0472027&quot;&gt;John Adams&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0496424&quot;&gt;30 Rock&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.0&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;213&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3186130&quot;&gt;Vinyl&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2261227&quot;&gt;Altered Carbon&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.0&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;214&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3232262&quot;&gt;Sonic Boom&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1442462&quot;&gt;The Good Wife&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.0&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;215&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1578873&quot;&gt;Pretty Little Liars&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2628232&quot;&gt;Penny Dreadful&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.0&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;216&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4209256&quot;&gt;Colony&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0053488&quot;&gt;The Bugs Bunny Show&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.0&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;217&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0081912&quot;&gt;Only Fools and Horses....&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2372162&quot;&gt;Orange Is the New Black&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.0&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;218&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0112130&quot;&gt;Pride and Prejudice&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2100976&quot;&gt;Impractical Jokers&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.0&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;219&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1051220&quot;&gt;The Shannara Chronicles&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1844624&quot;&gt;American Horror Story&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.0&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;220&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2660734&quot;&gt;The Tomorrow People&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0103584&quot;&gt;X-Men: The Animated Series&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.0&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;221&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1883092&quot;&gt;Generation War&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5288312&quot;&gt;Skam&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.0&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;222&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0310455&quot;&gt;Foyle&apos;s War&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0380136&quot;&gt;QI&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.0&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;223&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0904208&quot;&gt;Californication&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1795096&quot;&gt;Behzat Ç.: Bir Ankara Polisiyesi&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.0&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;224&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5288312&quot;&gt;Skam&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0115147&quot;&gt;The Daily Show&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.0&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;225&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt7146600&quot;&gt;Wolf&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4635276&quot;&gt;Master of None&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.0&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;226&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt6461736&quot;&gt;The Brave&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0409591&quot;&gt;Naruto&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.0&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;227&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1602156&quot;&gt;Atlantis Rising: The Making of &apos;Atlantis Down&apos;&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1598754&quot;&gt;24: Live Another Day&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.0&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;228&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5137338&quot;&gt;Victoria&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0238784&quot;&gt;Gilmore Girls&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.0&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;229&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1358522&quot;&gt;White Collar&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2188671&quot;&gt;Bates Motel&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.0&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;230&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0813715&quot;&gt;Heroes&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2049116&quot;&gt;Gomorrah&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.0&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;231&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0111958&quot;&gt;Father Ted&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1548850&quot;&gt;Misfits&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.0&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;232&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3663490&quot;&gt;The Librarians&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0402711&quot;&gt;Boston Legal&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.0&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;233&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2548418&quot;&gt;Jonathan Strange &amp;amp; Mr Norrell&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0278238&quot;&gt;Samurai Jack&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.0&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;234&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2548038&quot;&gt;Banking Bad&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2741602&quot;&gt;The Blacklist&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.0&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;235&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0368530&quot;&gt;One Tree Hill&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5687612&quot;&gt;Fleabag&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.0&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;236&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5620076&quot;&gt;Chance&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0472027&quot;&gt;John Adams&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.0&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;237&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3305096&quot;&gt;Galavant&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1196946&quot;&gt;The Mentalist&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.0&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;238&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0098825&quot;&gt;House of Cards&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3487382&quot;&gt;Forever&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.0&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;239&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3520702&quot;&gt;Bloodline&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0458254&quot;&gt;The Colbert Report&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.0&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;240&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1954347&quot;&gt;Continuum&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0383126&quot;&gt;MythBusters&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.0&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;241&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt7665182&quot;&gt;TVF Bachelors&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0445114&quot;&gt;Extras&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.0&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;242&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0319969&quot;&gt;Carnivàle&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0165598&quot;&gt;That &apos;70s Show&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.0&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;243&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3322310&quot;&gt;Iron Fist&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0452046&quot;&gt;Criminal Minds&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.0&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;244&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0312172&quot;&gt;Monk&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0866442&quot;&gt;Eastbound &amp;amp; Down&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.0&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;245&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0083466&quot;&gt;Police Squad!&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt7016936&quot;&gt;Killing Eve&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.0&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;246&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0158552&quot;&gt;Charmed&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1219024&quot;&gt;Castle&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.0&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;247&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1910272&quot;&gt;Steins;Gate&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2384811&quot;&gt;Utopia&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.0&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;248&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1985443&quot;&gt;Hatfields &amp;amp; McCoys&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0840196&quot;&gt;Skins&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.0&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;249&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0262150&quot;&gt;Black Books&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt7493974&quot;&gt;Bodyguard&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.0&lt;/td&gt;
&lt;/tr&gt;

&lt;tr&gt;
&lt;td&gt;250&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5924572&quot;&gt;Deadly Class&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;7.7&lt;/td&gt;
&lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3428912&quot;&gt;Happy Valley&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;8.0&lt;/td&gt;
&lt;/tr&gt;
&lt;/table&gt;

&lt;/div&gt;
&lt;p&gt;&lt;button class=&quot;button&quot; id=&quot;bot-button-2&quot; onclick=&quot;expand2()&quot; style=&quot;display: none&quot;&gt;Expand&lt;/button&gt;&lt;/p&gt;

&lt;p&gt;How does this compare to the original &lt;a href=&quot;https://www.imdb.com/chart/toptv&quot;&gt;IMDb top 250 TV&lt;/a&gt;? Some general observations:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Top shows on IMDb’s show-rating list generally appear near the top of my episode-rating list.&lt;/li&gt;
  &lt;li&gt;Mini-series usually receive substantially lower episode ratings than show ratings. On the show-rating list, six of the top 10 are mini-series (&lt;em&gt;Chernobyl&lt;/em&gt;, &lt;em&gt;Band of Brothers&lt;/em&gt;, &lt;em&gt;Planet Earth&lt;/em&gt;, &lt;em&gt;Cosmos&lt;/em&gt;, &lt;em&gt;Blue Planet II&lt;/em&gt;, &lt;em&gt;Our Planet&lt;/em&gt;), but only two make it onto the episode-rating list (&lt;em&gt;Chernobyl&lt;/em&gt; and &lt;em&gt;Les Synaudes&lt;/em&gt;).&lt;/li&gt;
  &lt;li&gt;The episode-rating list lets some obscure shows sneak to the top. &lt;em&gt;Vader: A Star Wars Theory Fan Series&lt;/em&gt; gets the #12 spot. My guess is that’s because it only has one episode, and because its fanbase is disproportionately likely to vote on IMDb. Perhaps we could correct for this with an episode-weighting scheme analogous to vote-weighting, where the algorithm is more willing to give a show a high rating if it has many rated episodes.&lt;/li&gt;
&lt;/ul&gt;
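&lt;p&gt;The episode-weighting idea above can be sketched as a Bayesian average, the same shrink-toward-a-prior form that vote-weighting uses. This is my own illustrative sketch with made-up numbers, not IMDb’s published formula:&lt;/p&gt;

```python
def weighted_rating(mean_rating, n, prior_mean, prior_weight):
    """Bayesian average: blend an observed mean with `prior_weight`
    pseudo-observations at `prior_mean`. More data means less shrinkage."""
    return (n * mean_rating + prior_weight * prior_mean) / (n + prior_weight)

# Vote-weighting shrinks by number of votes. An episode-weighting scheme
# (hypothetical) could additionally shrink a show's average episode rating
# by its number of rated episodes, so a one-episode fan series can't ride
# a single inflated rating to the top of the list.
one_episode = weighted_rating(9.0, n=1, prior_mean=7.0, prior_weight=5)
many_episodes = weighted_rating(9.0, n=100, prior_mean=7.0, prior_weight=5)
print(round(one_episode, 2), round(many_episodes, 2))  # 7.33 8.9
```

&lt;p&gt;Under this scheme a one-episode show’s 9.0 average shrinks to roughly 7.3, while a show with a hundred well-rated episodes keeps almost all of its rating.&lt;/p&gt;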

&lt;p&gt;If we look at vote distributions, it appears that episodes are more vulnerable to vote stuffing than shows. Compare the vote distribution for &lt;a href=&quot;https://www.imdb.com/title/tt0795176/ratings?ref_=tt_ov_rt&quot;&gt;Planet Earth&lt;/a&gt; versus the highest-rated &lt;a href=&quot;https://www.imdb.com/title/tt0797603/ratings?ref_=tt_ov_rt&quot;&gt;single episode&lt;/a&gt; (“Pole to Pole”):&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/planet-earth-show-dist.png&quot; /&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/planet-earth-episode-dist.png&quot; /&gt;&lt;/p&gt;

&lt;p&gt;The show itself has 8.9% one-star ratings, but this episode has 22.6%. I find it hard to believe that 22.6% of viewers genuinely hated the episode that much. More likely, some groups are mass-voting one star to manipulate IMDb ratings. Perhaps they try harder on episode ratings than on show ratings, or shows are harder to manipulate because they get more organic votes, or IMDb’s anti-vote-stuffing algorithms work better for shows than for episodes.&lt;/p&gt;

&lt;h2 id=&quot;what-if-we-used-a-lower-vote-minimum&quot;&gt;What if we used a lower vote minimum?&lt;/h2&gt;

&lt;p&gt;I originally used 1000 as the vote minimum because it’s a reasonable round number and it produces results fairly similar to IMDb’s own weighting. But there’s an argument that 1000 is too high: plenty of fairly popular shows have episodes with fewer than 1000 total votes, since people on IMDb rate movies and shows much more often than they rate single episodes.&lt;/p&gt;

&lt;p&gt;There’s a tradeoff in deciding how strong our vote weighting should be. If we make the prior too strong by setting a high vote minimum, small but good shows get crowded out by popular shows that might not be as high-quality. But if we set too low a vote minimum, the top list becomes dominated by obscure shows that probably aren’t actually good.&lt;/p&gt;

&lt;p&gt;We can look at how the top list changes when we reduce the vote minimum from one thousand to one hundred. For comparison, the left two columns show the shows and ratings with a 100-vote minimum, and the right two columns use the 1000-vote minimum that our previous list used.&lt;/p&gt;
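&lt;p&gt;To make the tradeoff concrete, here is a toy illustration of how the vote minimum acts as a prior strength. The function and numbers are my own sketch, not IMDb’s actual method:&lt;/p&gt;

```python
def bayes_rating(mean, votes, site_mean, m):
    # m is the "vote minimum" acting as prior strength: an episode with
    # `votes` ratings is blended with m pseudo-votes at the site-wide mean.
    return (votes * mean + m * site_mean) / (votes + m)

# A hypothetical obscure episode: 150 votes averaging 9.4, site mean 7.0.
print(round(bayes_rating(9.4, 150, 7.0, 100), 2))   # 8.44 with m=100
print(round(bayes_rating(9.4, 150, 7.0, 1000), 2))  # 7.31 with m=1000
```

&lt;p&gt;With a 100-vote minimum the episode keeps most of its 9.4 rating; with a 1000-vote minimum it gets pulled nearly all the way back to the site average.&lt;/p&gt;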

&lt;p&gt;&lt;button class=&quot;button&quot; id=&quot;top-button-3&quot; onclick=&quot;expand3()&quot;&gt;Expand&lt;/button&gt;&lt;/p&gt;
&lt;div class=&quot;collapsible-content&quot; id=&quot;collapsible3&quot;&gt;
&lt;table&gt;
    &lt;tr&gt;
        &lt;th&gt;&lt;/th&gt;
        &lt;th&gt;Title (100-vote minimum)&lt;/th&gt;
        &lt;th&gt;Rating&lt;/th&gt;
        &lt;th&gt;Title (1000-vote minimum)&lt;/th&gt;
        &lt;th&gt;Rating&lt;/th&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
        &lt;td&gt;1&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt7366338&quot;&gt;Chernobyl&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.6&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt7366338&quot;&gt;Chernobyl&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.6&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3921820&quot;&gt;Les Synaudes&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.4&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0944947&quot;&gt;Game of Thrones&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;3&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2560140&quot;&gt;Attack on Titan&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0903747&quot;&gt;Breaking Bad&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;4&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3911580&quot;&gt;BoOzy&apos; OS&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3322312&quot;&gt;Daredevil&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;5&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt9475088&quot;&gt;Vader: A Star Wars Theory Fan Series&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.0&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1475582&quot;&gt;Sherlock&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;6&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt7137906&quot;&gt;When They See Us&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.0&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2560140&quot;&gt;Attack on Titan&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;7&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4517806&quot;&gt;Furusato-Time&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.0&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4574334&quot;&gt;Stranger Things&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;8&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3322312&quot;&gt;Daredevil&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.0&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3921820&quot;&gt;Les Synaudes&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.6&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;9&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4452630&quot;&gt;Code Black&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;9.0&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2699128&quot;&gt;The Leftovers&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.6&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;10&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2699128&quot;&gt;The Leftovers&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.9&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2243973&quot;&gt;Hannibal&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.6&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;11&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1839578&quot;&gt;Person of Interest&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.9&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1839578&quot;&gt;Person of Interest&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.5&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;12&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4145054&quot;&gt;Shadowhunters&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.9&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt9475088&quot;&gt;Vader: A Star Wars Theory Fan Series&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.5&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;13&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0903747&quot;&gt;Breaking Bad&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.9&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt7137906&quot;&gt;When They See Us&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.5&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;14&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0944947&quot;&gt;Game of Thrones&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.9&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2356777&quot;&gt;True Detective&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.5&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;15&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2243973&quot;&gt;Hannibal&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.9&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2431438&quot;&gt;Sense8&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.5&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;16&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4508902&quot;&gt;One Punch Man&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.8&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3911580&quot;&gt;BoOzy&apos; OS&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.5&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;17&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2632424&quot;&gt;The Originals&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.8&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5675620&quot;&gt;The Punisher&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.5&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;18&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0098887&quot;&gt;Parenthood&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.8&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0185906&quot;&gt;Band of Brothers&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;19&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt8416494&quot;&gt;Doom Patrol&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.8&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0411008&quot;&gt;Lost&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;20&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1475582&quot;&gt;Sherlock&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.8&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2707408&quot;&gt;Narcos&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;21&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2431438&quot;&gt;Sense8&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.8&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4145054&quot;&gt;Shadowhunters&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;22&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0072500&quot;&gt;Fawlty Towers&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.8&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0303461&quot;&gt;Firefly&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;23&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1598754&quot;&gt;24: Live Another Day&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.8&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0773262&quot;&gt;Dexter&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;24&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4574334&quot;&gt;Stranger Things&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.8&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0141842&quot;&gt;The Sopranos&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;25&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3909280&quot;&gt;Flat TV&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.8&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4093826&quot;&gt;Twin Peaks&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;26&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3487382&quot;&gt;Forever&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.7&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4508902&quot;&gt;One Punch Man&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;27&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2657262&quot;&gt;Star-Crossed&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.7&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0072500&quot;&gt;Fawlty Towers&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;28&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0193676&quot;&gt;Freaks and Geeks&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.7&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0108778&quot;&gt;Friends&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;29&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3488298&quot;&gt;American Crime&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.7&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1598754&quot;&gt;24: Live Another Day&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;30&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5675620&quot;&gt;The Punisher&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.7&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt8416494&quot;&gt;Doom Patrol&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;31&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1984119&quot;&gt;Feud&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.7&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1124373&quot;&gt;Sons of Anarchy&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;32&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2707408&quot;&gt;Narcos&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.7&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4452630&quot;&gt;Code Black&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;33&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt8595766&quot;&gt;Yeh Meri Family&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.7&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0193676&quot;&gt;Freaks and Geeks&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;34&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt9471962&quot;&gt;Halka&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.7&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0412142&quot;&gt;House&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;35&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0096694&quot;&gt;Saved by the Bell&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.7&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2017109&quot;&gt;Banshee&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;36&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2548038&quot;&gt;Banking Bad&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.7&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt8127644&quot;&gt;The Adventures of Sherlock Holmes and Dr. Watson&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;37&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1567432&quot;&gt;Teen Wolf&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.7&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2375692&quot;&gt;Black Sails&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;38&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2017109&quot;&gt;Banshee&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.7&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3487382&quot;&gt;Forever&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;39&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1227926&quot;&gt;Dr. Horrible&apos;s Sing-Along Blog&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.7&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2628232&quot;&gt;Penny Dreadful&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;40&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1836037&quot;&gt;Longmire&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.7&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1695360&quot;&gt;The Legend of Korra&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;41&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0411008&quot;&gt;Lost&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.7&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt7493974&quot;&gt;Bodyguard&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;42&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4742876&quot;&gt;TVF Pitchers&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.7&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0098936&quot;&gt;Twin Peaks&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;43&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3148266&quot;&gt;12 Monkeys&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.7&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2632424&quot;&gt;The Originals&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;44&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0141842&quot;&gt;The Sopranos&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.6&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4189022&quot;&gt;Ash vs Evil Dead&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;45&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2543328&quot;&gt;TURN: Washington&apos;s Spies&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.6&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4635276&quot;&gt;Master of None&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;46&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1695360&quot;&gt;The Legend of Korra&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.6&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0417299&quot;&gt;Avatar: The Last Airbender&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;47&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0096548&quot;&gt;Blackadder Goes Forth&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.6&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2188671&quot;&gt;Bates Motel&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;48&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0303461&quot;&gt;Firefly&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.6&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4635282&quot;&gt;The OA&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;49&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2375692&quot;&gt;Black Sails&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.6&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3749900&quot;&gt;Gotham&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;50&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0185906&quot;&gt;Band of Brothers&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.6&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1865718&quot;&gt;Gravity Falls&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;51&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3232262&quot;&gt;Sonic Boom&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.6&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2649356&quot;&gt;Sharp Objects&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;52&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1865718&quot;&gt;Gravity Falls&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.6&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2879552&quot;&gt;11.22.63&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;53&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2356777&quot;&gt;True Detective&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.6&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0979432&quot;&gt;Boardwalk Empire&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;54&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0401969&quot;&gt;Playing It Straight&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.6&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1567432&quot;&gt;Teen Wolf&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;55&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1124373&quot;&gt;Sons of Anarchy&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.6&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4230076&quot;&gt;The Defenders&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;56&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt9130692&quot;&gt;Dynasties&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.6&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1984119&quot;&gt;Feud&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;57&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5687612&quot;&gt;Fleabag&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.6&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1474684&quot;&gt;Luther&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;58&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2188671&quot;&gt;Bates Motel&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.6&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0804503&quot;&gt;Mad Men&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;59&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2874692&quot;&gt;When Calls the Heart&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.6&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1870479&quot;&gt;The Newsroom&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;60&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0096657&quot;&gt;Mr. Bean&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.6&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1439629&quot;&gt;Community&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;61&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2465494&quot;&gt;The High School&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.6&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1869454&quot;&gt;Good Omens&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;62&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0773262&quot;&gt;Dexter&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.6&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0098904&quot;&gt;Seinfeld&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;63&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4189022&quot;&gt;Ash vs Evil Dead&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.6&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0096657&quot;&gt;Mr. Bean&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.1&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;64&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0417299&quot;&gt;Avatar: The Last Airbender&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.6&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0384766&quot;&gt;Rome&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.0&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;65&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1602156&quot;&gt;Atlantis Rising: The Making of &apos;Atlantis Down&apos;&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.6&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0290978&quot;&gt;The Office&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.0&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;66&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2136443&quot;&gt;The Playlist&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.6&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2177461&quot;&gt;A Discovery of Witches&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.0&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;67&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0934814&quot;&gt;Chuck&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.5&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0934814&quot;&gt;Chuck&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.0&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;68&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4093826&quot;&gt;Twin Peaks&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.5&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4742876&quot;&gt;TVF Pitchers&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.0&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;69&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0081846&quot;&gt;Cosmos&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.5&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2234222&quot;&gt;Orphan Black&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.0&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;70&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4635282&quot;&gt;The OA&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.5&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0092337&quot;&gt;Dekalog&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.0&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;71&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1489428&quot;&gt;Justified&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.5&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1227926&quot;&gt;Dr. Horrible&apos;s Sing-Along Blog&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.0&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;72&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2193041&quot;&gt;Beauty and the Beast&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.5&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1119644&quot;&gt;Fringe&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.0&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;73&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2177461&quot;&gt;A Discovery of Witches&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.5&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1405406&quot;&gt;The Vampire Diaries&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.0&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;74&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1870479&quot;&gt;The Newsroom&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.5&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5687612&quot;&gt;Fleabag&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.0&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;75&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4317218&quot;&gt;Wake Up Call&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.5&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2657262&quot;&gt;Star-Crossed&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.0&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;76&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4635276&quot;&gt;Master of None&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.5&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0348914&quot;&gt;Deadwood&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.0&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;77&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0290978&quot;&gt;The Office&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.5&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0096548&quot;&gt;Blackadder Goes Forth&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.0&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;78&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0979432&quot;&gt;Boardwalk Empire&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.5&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0306414&quot;&gt;The Wire&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.0&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;79&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0412142&quot;&gt;House&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.5&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0386676&quot;&gt;The Office&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.0&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;80&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2234222&quot;&gt;Orphan Black&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.5&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3148266&quot;&gt;12 Monkeys&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.0&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;81&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2211065&quot;&gt;The Other Sport&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.5&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5511582&quot;&gt;Timeless&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.0&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;82&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2628232&quot;&gt;Penny Dreadful&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.5&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3475734&quot;&gt;Agent Carter&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.0&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;83&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2937900&quot;&gt;The Knick&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.5&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1489428&quot;&gt;Justified&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.0&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;84&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt8127644&quot;&gt;The Adventures of Sherlock Holmes and Dr. Watson&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.5&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0460649&quot;&gt;How I Met Your Mother&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.0&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;85&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0348914&quot;&gt;Deadwood&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.5&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1843230&quot;&gt;Once Upon a Time&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.0&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;86&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1869454&quot;&gt;Good Omens&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.5&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5435008&quot;&gt;Gilmore Girls: A Year in the Life&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.0&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;87&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5271842&quot;&gt;De Terugkeer van&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.5&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1442449&quot;&gt;Spartacus&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.0&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;88&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0108778&quot;&gt;Friends&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.5&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1856010&quot;&gt;House of Cards&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.0&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;89&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1103987&quot;&gt;Leverage&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.5&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2937900&quot;&gt;The Knick&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.0&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;90&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3012698&quot;&gt;Olive Kitteridge&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.5&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1836037&quot;&gt;Longmire&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.0&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;91&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0314979&quot;&gt;Battlestar Galactica&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.5&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt6586318&quot;&gt;Patrick Melrose&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.0&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;92&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1405406&quot;&gt;The Vampire Diaries&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.5&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0285331&quot;&gt;24&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.0&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;93&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0075520&quot;&gt;Jesus of Nazareth&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2149175&quot;&gt;The Americans&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.0&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;94&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5651844&quot;&gt;Travelers&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1327801&quot;&gt;Glee&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.0&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;95&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0285331&quot;&gt;24&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5516154&quot;&gt;Godless&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.9&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;96&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3749900&quot;&gt;Gotham&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0367279&quot;&gt;Arrested Development&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.9&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;97&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0384766&quot;&gt;Rome&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1606375&quot;&gt;Downton Abbey&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.9&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;98&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0804503&quot;&gt;Mad Men&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5651844&quot;&gt;Travelers&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.9&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;99&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2384811&quot;&gt;Utopia&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1266020&quot;&gt;Parks and Recreation&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.9&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;100&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0111987&quot;&gt;Goosebumps&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2249364&quot;&gt;Broadchurch&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.9&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;101&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1474684&quot;&gt;Luther&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3488298&quot;&gt;American Crime&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.9&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;102&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2183404&quot;&gt;Rectify&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3581932&quot;&gt;And Then There Were None&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.9&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;103&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1592154&quot;&gt;Nikita&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4145384&quot;&gt;Scream Queens&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.9&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;104&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt6958022&quot;&gt;The Defiant Ones&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2357547&quot;&gt;Jessica Jones&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.9&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;105&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0790772&quot;&gt;Rules of Engagement&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0314979&quot;&gt;Battlestar Galactica&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.9&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;106&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0098904&quot;&gt;Seinfeld&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3718778&quot;&gt;Over the Garden Wall&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.9&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;107&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3718778&quot;&gt;Over the Garden Wall&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2085059&quot;&gt;Black Mirror&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.9&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;108&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1830617&quot;&gt;Grimm&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0487831&quot;&gt;The IT Crowd&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.9&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;109&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2402207&quot;&gt;The Last Ship&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1830617&quot;&gt;Grimm&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.9&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;110&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5511582&quot;&gt;Timeless&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3489184&quot;&gt;Constantine&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.9&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;111&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2149175&quot;&gt;The Americans&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2211065&quot;&gt;The Other Sport&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.9&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;112&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt6586318&quot;&gt;Patrick Melrose&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0162065&quot;&gt;Angel&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.9&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;113&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt7493974&quot;&gt;Bodyguard&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1637727&quot;&gt;The Killing&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.9&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;114&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1327801&quot;&gt;Glee&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2193041&quot;&gt;Beauty and the Beast&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.9&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;115&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2297757&quot;&gt;Nathan for You&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0106179&quot;&gt;The X-Files&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.9&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;116&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4145384&quot;&gt;Scream Queens&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5368542&quot;&gt;The Exorcist&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.9&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;117&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5425186&quot;&gt;Horace and Pete&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1219024&quot;&gt;Castle&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.9&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;118&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt7146600&quot;&gt;Wolf&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2384811&quot;&gt;Utopia&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.9&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;119&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt6466208&quot;&gt;Breathe&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3655448&quot;&gt;The Young Pope&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.9&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;120&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1843230&quot;&gt;Once Upon a Time&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2402207&quot;&gt;The Last Ship&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.9&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;121&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0098936&quot;&gt;Twin Peaks&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0445114&quot;&gt;Extras&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.9&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;122&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2649356&quot;&gt;Sharp Objects&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0118276&quot;&gt;Buffy the Vampire Slayer&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.9&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;123&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0118421&quot;&gt;Oz&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0279600&quot;&gt;Smallville&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.9&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;124&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5620076&quot;&gt;Chance&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5189670&quot;&gt;Making a Murderer&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;125&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt7665182&quot;&gt;TVF Bachelors&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0374463&quot;&gt;The Pacific&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;126&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1837642&quot;&gt;Revenge&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0118421&quot;&gt;Oz&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;127&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0368530&quot;&gt;One Tree Hill&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0248654&quot;&gt;Six Feet Under&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;128&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1606375&quot;&gt;Downton Abbey&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1837642&quot;&gt;Revenge&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;129&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3766376&quot;&gt;Vice Principals&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3012698&quot;&gt;Olive Kitteridge&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;130&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2879552&quot;&gt;11.22.63&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0111987&quot;&gt;Goosebumps&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;131&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1439629&quot;&gt;Community&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5580540&quot;&gt;Santa Clarita Diet&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;132&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0995832&quot;&gt;Generation Kill&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2094262&quot;&gt;Da Vinci&apos;s Demons&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;133&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2581458&quot;&gt;Looking&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.4&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0387199&quot;&gt;Entourage&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;134&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1119644&quot;&gt;Fringe&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4299972&quot;&gt;The Jinx: The Life and Deaths of Robert Durst&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;135&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5368542&quot;&gt;The Exorcist&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4834206&quot;&gt;A Series of Unfortunate Events&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;136&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0925266&quot;&gt;Pushing Daisies&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1103987&quot;&gt;Leverage&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;137&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt8682948&quot;&gt;The Act&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4422836&quot;&gt;Limitless&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;138&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0135096&quot;&gt;The Living Planet&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0088484&quot;&gt;Blackadder II&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;139&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt7550774&quot;&gt;Black and White Love&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4047038&quot;&gt;Dirk Gently&apos;s Holistic Detective Agency&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;140&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3865236&quot;&gt;Into the Badlands&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2874692&quot;&gt;When Calls the Heart&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;141&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0237123&quot;&gt;Coupling&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0092324&quot;&gt;Blackadder the Third&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;142&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0071075&quot;&gt;The World at War&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3865236&quot;&gt;Into the Badlands&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;143&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5011816&quot;&gt;Sneaky Pete&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0079902&quot;&gt;Sherlock Holmes and Doctor Watson: The Acquaintance&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;144&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0092324&quot;&gt;Blackadder the Third&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0103352&quot;&gt;Are You Afraid of the Dark?&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;145&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0279600&quot;&gt;Smallville&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0995832&quot;&gt;Generation Kill&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;146&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1637727&quot;&gt;The Killing&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2543328&quot;&gt;TURN: Washington&apos;s Spies&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;147&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4299972&quot;&gt;The Jinx: The Life and Deaths of Robert Durst&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2294189&quot;&gt;The Fall&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;148&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt8694364&quot;&gt;Years and Years&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2654580&quot;&gt;Almost Human&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;149&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3581932&quot;&gt;And Then There Were None&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4517806&quot;&gt;Furusato-Time&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;150&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4643084&quot;&gt;Counterpart&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0491738&quot;&gt;Psych&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;151&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1724587&quot;&gt;Green Lantern: The Animated Series&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4643084&quot;&gt;Counterpart&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;152&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2070443&quot;&gt;SheZow&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3322314&quot;&gt;Luke Cage&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;153&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5516154&quot;&gt;Godless&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1592154&quot;&gt;Nikita&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;154&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0088484&quot;&gt;Blackadder II&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2136443&quot;&gt;The Playlist&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;155&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5435008&quot;&gt;Gilmore Girls: A Year in the Life&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3766376&quot;&gt;Vice Principals&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;156&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0106112&quot;&gt;Return to Lonesome Dove&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0925266&quot;&gt;Pushing Daisies&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;157&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3877200&quot;&gt;The Missing&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2183404&quot;&gt;Rectify&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;158&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1219024&quot;&gt;Castle&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1196946&quot;&gt;The Mentalist&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;159&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0103352&quot;&gt;Are You Afraid of the Dark?&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0096694&quot;&gt;Saved by the Bell&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;160&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2155025&quot;&gt;Please Like Me&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2262456&quot;&gt;The Hollow Crown&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;161&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4322992&quot;&gt;The Norman Conquests&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt9130692&quot;&gt;Dynasties&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;162&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2249364&quot;&gt;Broadchurch&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0285403&quot;&gt;Scrubs&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;163&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3655448&quot;&gt;The Young Pope&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt9253866&quot;&gt;Our Planet&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;164&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0387199&quot;&gt;Entourage&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt8595766&quot;&gt;Yeh Meri Family&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;165&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0445114&quot;&gt;Extras&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1759761&quot;&gt;Veep&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;166&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt6396252&quot;&gt;Ölene Kadar&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1492966&quot;&gt;Louie&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;167&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0070644&quot;&gt;Scenes from a Marriage&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5425186&quot;&gt;Horace and Pete&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;168&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5167198&quot;&gt;The Hunt&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0407362&quot;&gt;Battlestar Galactica&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;169&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4230076&quot;&gt;The Defenders&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0081846&quot;&gt;Cosmos&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;170&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1699748&quot;&gt;Hell on Wheels&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt8682948&quot;&gt;The Act&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;171&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5580540&quot;&gt;Santa Clarita Diet&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0187664&quot;&gt;Spaced&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;172&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt7927936&quot;&gt;Flames&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3877200&quot;&gt;The Missing&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;173&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt6040674&quot;&gt;Waco&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5011816&quot;&gt;Sneaky Pete&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.8&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;174&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3475734&quot;&gt;Agent Carter&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2654620&quot;&gt;The Strain&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;175&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0081912&quot;&gt;Only Fools and Horses....&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt8694364&quot;&gt;Years and Years&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;176&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0098825&quot;&gt;House of Cards&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0805663&quot;&gt;Jericho&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;177&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5189670&quot;&gt;Making a Murderer&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0237123&quot;&gt;Coupling&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;178&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1266020&quot;&gt;Parks and Recreation&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.3&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2189461&quot;&gt;Marco Polo&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;179&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0187664&quot;&gt;Spaced&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0149460&quot;&gt;Futurama&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;180&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1759761&quot;&gt;Veep&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1199099&quot;&gt;Merlin&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;181&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0830361&quot;&gt;The Lost Room&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1135300&quot;&gt;Dollhouse&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;182&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt7078180&quot;&gt;Violet Evergarden&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1699748&quot;&gt;Hell on Wheels&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;183&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0278238&quot;&gt;Samurai Jack&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2581458&quot;&gt;Looking&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;184&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0071007&quot;&gt;Little House on the Prairie&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2297757&quot;&gt;Nathan for You&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;185&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt6587094&quot;&gt;Five Came Back&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1733785&quot;&gt;Bron/Broen&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;186&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0809488&quot;&gt;The Godfather Saga&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0278238&quot;&gt;Samurai Jack&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;187&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2094262&quot;&gt;Da Vinci&apos;s Demons&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4396630&quot;&gt;The Gifted&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;188&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4592410&quot;&gt;The Get Down&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0098887&quot;&gt;Parenthood&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;189&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0491738&quot;&gt;Psych&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1220617&quot;&gt;The Inbetweeners&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;190&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3673794&quot;&gt;Kingdom&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4592410&quot;&gt;The Get Down&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;191&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0111958&quot;&gt;Father Ted&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3909280&quot;&gt;Flat TV&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;192&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt7765404&quot;&gt;Wayne&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2618986&quot;&gt;Wayward Pines&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;193&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0367279&quot;&gt;Arrested Development&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4122068&quot;&gt;Humans&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;194&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt8809646&quot;&gt;College Romance&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt8679236&quot;&gt;Who Is America?&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;195&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0096639&quot;&gt;Lonesome Dove&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0863046&quot;&gt;Flight of the Conchords&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;196&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4047038&quot;&gt;Dirk Gently&apos;s Holistic Detective Agency&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5580146&quot;&gt;Maniac&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;197&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0863046&quot;&gt;Flight of the Conchords&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt6040674&quot;&gt;Waco&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;198&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0386676&quot;&gt;The Office&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt7520794&quot;&gt;Russian Doll&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;199&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt6474174&quot;&gt;Kevin (Probably) Saves the World&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0200276&quot;&gt;The West Wing&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;200&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5574940&quot;&gt;The New Edition Story&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2070791&quot;&gt;Revolution&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;201&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2548418&quot;&gt;Jonathan Strange &amp;amp; Mr Norrell&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0086661&quot;&gt;The Adventures of Sherlock Holmes&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;202&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2660734&quot;&gt;The Tomorrow People&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2930604&quot;&gt;Star Wars Rebels&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;203&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0485842&quot;&gt;Studio 60 on the Sunset Strip&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt6466208&quot;&gt;Breathe&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;204&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0306414&quot;&gt;The Wire&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0790772&quot;&gt;Rules of Engagement&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;205&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1660055&quot;&gt;Scooby-Doo! Mystery Incorporated&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt6877772&quot;&gt;American Vandal&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;206&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1199099&quot;&gt;Merlin&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2543312&quot;&gt;Halt and Catch Fire&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;207&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt9253866&quot;&gt;Our Planet&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0075520&quot;&gt;Jesus of Nazareth&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;208&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3215364&quot;&gt;Battle Creek&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2452242&quot;&gt;Happy!&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;209&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3489184&quot;&gt;Constantine&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt6958022&quot;&gt;The Defiant Ones&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;210&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0074006&quot;&gt;I, Claudius&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt6866266&quot;&gt;Escape at Dannemora&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;211&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2654580&quot;&gt;Almost Human&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0830361&quot;&gt;The Lost Room&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;212&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0098769&quot;&gt;The Civil War&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0472027&quot;&gt;John Adams&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;213&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0310455&quot;&gt;Foyle&apos;s War&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3186130&quot;&gt;Vinyl&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;214&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt7849864&quot;&gt;Grey&apos;s Anatomy: B-Team&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3232262&quot;&gt;Sonic Boom&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;215&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5514358&quot;&gt;Steins;Gate 0&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1578873&quot;&gt;Pretty Little Liars&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;216&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt6461736&quot;&gt;The Brave&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4209256&quot;&gt;Colony&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;217&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0106141&quot;&gt;South Beach&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0081912&quot;&gt;Only Fools and Horses....&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;218&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1655078&quot;&gt;Into the Universe with Stephen Hawking&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0112130&quot;&gt;Pride and Prejudice&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;219&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2543312&quot;&gt;Halt and Catch Fire&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1051220&quot;&gt;The Shannara Chronicles&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;220&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0290988&quot;&gt;Trailer Park Boys&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2660734&quot;&gt;The Tomorrow People&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;221&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4834206&quot;&gt;A Series of Unfortunate Events&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1883092&quot;&gt;Generation War&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;222&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1492966&quot;&gt;Louie&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0310455&quot;&gt;Foyle&apos;s War&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;223&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0092337&quot;&gt;Dekalog&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0904208&quot;&gt;Californication&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;224&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0805663&quot;&gt;Jericho&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5288312&quot;&gt;Skam&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;225&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5288312&quot;&gt;Skam&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt7146600&quot;&gt;Wolf&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;226&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1533435&quot;&gt;Miranda&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt6461736&quot;&gt;The Brave&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;227&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5249462&quot;&gt;Erased&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1602156&quot;&gt;Atlantis Rising: The Making of &apos;Atlantis Down&apos;&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;228&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4855114&quot;&gt;Good Behavior&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5137338&quot;&gt;Victoria&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;229&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0487831&quot;&gt;The IT Crowd&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1358522&quot;&gt;White Collar&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;230&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3305096&quot;&gt;Galavant&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0813715&quot;&gt;Heroes&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;231&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt9114372&quot;&gt;Çarpisma&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0111958&quot;&gt;Father Ted&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;232&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0472027&quot;&gt;John Adams&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3663490&quot;&gt;The Librarians&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;233&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2262456&quot;&gt;The Hollow Crown&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2548418&quot;&gt;Jonathan Strange &amp;amp; Mr Norrell&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;234&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt8858472&quot;&gt;Daniel Sloss: Live Shows&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2548038&quot;&gt;Banking Bad&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;235&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt4422836&quot;&gt;Limitless&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0368530&quot;&gt;One Tree Hill&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;236&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1831164&quot;&gt;Leyla and Mecnun&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5620076&quot;&gt;Chance&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;237&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt2452242&quot;&gt;Happy!&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3305096&quot;&gt;Galavant&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;238&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0043208&quot;&gt;I Love Lucy&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0098825&quot;&gt;House of Cards&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;239&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0460649&quot;&gt;How I Met Your Mother&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3520702&quot;&gt;Bloodline&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;240&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0248654&quot;&gt;Six Feet Under&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1954347&quot;&gt;Continuum&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;241&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5137338&quot;&gt;Victoria&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt7665182&quot;&gt;TVF Bachelors&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;242&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt9529500&quot;&gt;Kuzgun&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0319969&quot;&gt;Carnivàle&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;243&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0416394&quot;&gt;The Mighty Boosh&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt3322310&quot;&gt;Iron Fist&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;244&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0103376&quot;&gt;Bottom&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0312172&quot;&gt;Monk&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;245&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0200276&quot;&gt;The West Wing&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0083466&quot;&gt;Police Squad!&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;246&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt7599942&quot;&gt;Whiskey Cavalier&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0158552&quot;&gt;Charmed&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;247&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0108903&quot;&gt;ReBoot&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1910272&quot;&gt;Steins;Gate&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;248&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0112130&quot;&gt;Pride and Prejudice&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.2&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1985443&quot;&gt;Hatfields &amp;amp; McCoys&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;249&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0182627&quot;&gt;Rescue 77&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.1&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt0262150&quot;&gt;Black Books&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;

    &lt;tr&gt;
        &lt;td&gt;250&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt1442449&quot;&gt;Spartacus&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;8.1&lt;/td&gt;
        &lt;td&gt;&lt;a href=&quot;https://imdb.com/title/tt5924572&quot;&gt;Deadly Class&lt;/a&gt;&lt;/td&gt;
        &lt;td&gt;7.7&lt;/td&gt;
    &lt;/tr&gt;
&lt;/table&gt;

&lt;/div&gt;

&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;Any rating system inevitably has bias. When people rate TV shows on IMDb, they might be rating how much they like the idea of the show more than how much they actually like the show itself. They’re probably less likely to do this for individual episodes because it’s easier to rate an episode based on the actual experience you had watching it. On the other hand, the episode-based ranking appears vulnerable to vote stuffing. It also looks like people are much more likely to rate episodes that they love: for example, the &lt;a href=&quot;https://www.imdb.com/title/tt9906260/ratings?ref_=tt_ov_rt&quot;&gt;top-rated episode of Attack on Titan&lt;/a&gt; has over 20,000 votes, but a &lt;a href=&quot;https://www.imdb.com/title/tt6723672/ratings?ref_=tt_ov_rt&quot;&gt;typical episode&lt;/a&gt; only has about 2,000. And both of these systems share a big source of bias: the ratings come from IMDb users, and IMDb users do not form a representative sample of the TV-watching population.&lt;/p&gt;

&lt;p&gt;Given that every system has its own bias, I believe it’s valuable to look at different rankings, consider their differences, and think about how their biases may or may not reflect your own preferences so that you can find the rating system that’s the most helpful for you.&lt;/p&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;In this particular case, the low ratings probably come from vote stuffing–the episodes have way more 1-star ratings than you’d naturally expect them to get. &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:4&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Database downloaded and screenshots taken on 2019-07-12. &lt;a href=&quot;#fnref:4&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:2&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;This list differs somewhat from IMDb’s &lt;a href=&quot;https://www.imdb.com/chart/toptv&quot;&gt;official top 250 list&lt;/a&gt; because IMDb uses more complex algorithms to try to counteract vote stuffing. &lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;

                </description>
			</item>
		
			<item>
				<title>High School Science Experiments</title>
				<pubDate>Sat, 08 Jun 2019 00:00:00 -0700</pubDate>
				<link>http://mdickens.me/2019/06/08/high_school_science_experiments/</link>
				<guid isPermaLink="true">http://mdickens.me/2019/06/08/high_school_science_experiments/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;Experiments are a critical part of science—perhaps even the central feature. But middle school and high school science experiments don’t teach students how experiments are supposed to work.&lt;/p&gt;

&lt;!-- more --&gt;

&lt;p&gt;The way I did science experiments in school went something like this:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Learn about some natural phenomenon.&lt;/li&gt;
  &lt;li&gt;Teacher explains an experiment intended to test the natural phenomenon or at least vaguely relate to it.&lt;/li&gt;
  &lt;li&gt;We run the experiment, with the ostensible goal of observing the natural phenomenon.&lt;/li&gt;
  &lt;li&gt;We get results totally different from what the laws of nature predict.&lt;/li&gt;
  &lt;li&gt;Whatever, let’s move on to the next subject.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For example, I remember in physics class we learned how an object’s acceleration as it rolls down an incline depends on the steepness of the incline. Then we did an “experiment” to test this by rolling marbles down inclines and measuring how far they got in a fixed amount of time. The results we got were inconsistent with the laws of mechanics, but nobody questioned this. We all assumed that our experiment was not sufficiently well-controlled to produce reliable results (which was accurate).&lt;/p&gt;
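
&lt;p&gt;For reference, the prediction we were supposed to confirm is easy to compute. A minimal sketch (the 5/7 factor assumes a uniform solid marble rolling without slipping; the incline angle and time are made-up inputs):&lt;/p&gt;

```python
import math

def roll_distance(theta_deg, t, g=9.81):
    """Predicted distance for a solid sphere rolling from rest down an incline.

    Rolling without slipping reduces the acceleration from g*sin(theta)
    to (5/7)*g*sin(theta), because some of the energy goes into rotation.
    """
    a = (5 / 7) * g * math.sin(math.radians(theta_deg))
    return 0.5 * a * t ** 2

# e.g. a 10-degree incline over 2 seconds gives roughly 2.4 meters
distance = roll_distance(10, 2)
```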

&lt;p&gt;This is the antithesis of how experiments are supposed to work. The point of running an experiment is to learn something about the world. Experiments should be well controlled so you can be confident that you are learning something.&lt;/p&gt;

&lt;p&gt;Running a good experiment is not easy. Experiments can easily &lt;a href=&quot;https://en.wikipedia.org/wiki/Replication_crisis&quot;&gt;fail to produce good results&lt;/a&gt;, so they must be designed carefully. Designing good experiments is a skill. And the way experiments are done in school does nothing to teach this skill.&lt;/p&gt;

&lt;p&gt;If you know in advance that you have bad methodology and you’re going to throw away the results of your experiment, what’s the point? Experiments as they are done in school don’t teach about natural laws (because you ignore whatever results you get), and they don’t teach how to design good experiments (because no effort is made to produce consistent results).&lt;/p&gt;

&lt;p&gt;I can imagine an effective science class that focused on teaching students how to design experiments. You could perhaps start by providing students a simple natural law, such as an object’s acceleration on an inclined plane, then challenge them to produce an experiment that replicates the results. If they don’t produce consistent results, push them to figure out why, and refine their experimental conditions until they can get reliable measurements.&lt;/p&gt;

&lt;p&gt;But the point of an experiment isn’t (usually) to reproduce known results—it’s to figure out something unknown. A good experiment should be able to falsify a hypothesis; you shouldn’t just keep changing your experiment until you get the expected results. (The process I described in the previous paragraph is basically P-hacking.) I don’t know how you would teach people to get from “design an experiment that can consistently replicate a known natural law” to “design an experiment that can tell you something you don’t already know, and be confident that it’s correct.” But I’ve only been thinking about it for a few minutes. We are collectively wasting tens of millions of hours per year having students run experiments while learning nothing about how to run experiments, and I’m sure we can do better.&lt;/p&gt;

&lt;p&gt;Let me throw out a slightly more sophisticated idea for how to teach experiments. Give students a natural phenomenon to investigate; it should be something they probably don’t already know (so they don’t know what result to expect), but that isn’t too hard to test. Divide the students into groups and have them design and implement experiments to figure out the phenomenon. Then challenge them to peer review each other’s experiments and look for flaws. Refine the experiments until most of the class agrees on the correct methodology and can replicate each other’s results.&lt;/p&gt;

&lt;p&gt;This also provides a natural way to teach students statistics. If you need to develop good experimental methodologies, you need to have a way of knowing how reliable your results are and how many trials to run. Some students will try to understand how to do this, and as they begin to think more deeply about it, they will inevitably ask the same questions that inferential statistics is meant to answer. This is the perfect time to equip them with some statistics knowledge that they can use to improve their understanding of science.&lt;/p&gt;
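
&lt;p&gt;As a sketch of the kind of statistics this would motivate, here’s roughly how students might quantify the reliability of their measurements (the trial data is hypothetical, and the normal-approximation interval is just one reasonable choice):&lt;/p&gt;

```python
import math
import statistics

def confidence_interval(measurements, z=1.96):
    """Approximate 95% confidence interval for the mean of repeated trials."""
    mean = statistics.mean(measurements)
    sem = statistics.stdev(measurements) / math.sqrt(len(measurements))
    return (mean - z * sem, mean + z * sem)

# hypothetical roll distances (meters) from five trials of an incline experiment
trials = [2.31, 2.48, 2.40, 2.52, 2.37]
low, high = confidence_interval(trials)
```

&lt;p&gt;If the interval is too wide to distinguish the hypotheses under test, that’s a concrete signal to run more trials or tighten the experimental controls.&lt;/p&gt;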

&lt;p&gt;I’m tempted to get overzealous about how significant it would be if we consistently ran science classes this way. I would like to say that it would solve the replication crisis, bring an end to shoddy news reporting, and revolutionize politics. Probably none of that would happen, and maybe this whole thing isn’t even a good idea. I’m just theorizing; I haven’t tested any of these ideas experimentally.&lt;/p&gt;



                </description>
			</item>
		
			<item>
				<title>How Can Donors Incentivize Good Predictions on Important but Unpopular Topics?</title>
				<pubDate>Sat, 02 Feb 2019 00:00:00 -0800</pubDate>
				<link>http://mdickens.me/2019/02/02/how_can_donors_incentivize_good_predictions/</link>
				<guid isPermaLink="true">http://mdickens.me/2019/02/02/how_can_donors_incentivize_good_predictions/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;Altruists often would like to get good predictions on questions that don’t necessarily have great market significance. For example:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Will a replication of a study of cash transfers show similar results?&lt;/li&gt;
  &lt;li&gt;How much money will GiveWell move in the next five years?&lt;/li&gt;
  &lt;li&gt;If cultured meat were price-competitive, what percent of consumers would prefer to buy it over conventional meat?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If a donor would like to give money to help make better predictions, how can they do that?&lt;/p&gt;

&lt;!-- more --&gt;

&lt;p&gt;You can’t just pay people to make predictions, because there’s no incentive for their predictions to actually be accurate and well-calibrated. One step better would be to pay out only if their predictions are correct, but that still incentivizes people who may be uninformed to make predictions because there’s no downside to being wrong.&lt;/p&gt;

&lt;p&gt;Another idea is to offer to make large bets, so that your counterparty can make a lot of money for being right, but they also want to avoid being wrong. That would incentivize people to actually do research and figure out how to make money off of betting against you. This idea, however, doesn’t necessarily give you great probability estimates because you still have to pick a probability at which to offer a bet. For example, if you offer to make a large bet at 50% odds and someone takes you up on it, then that could mean they believe the true probability is 60% or 99%, and you don’t have any great way of knowing which.&lt;/p&gt;

&lt;p&gt;You could get around this by offering lots of bets at varying odds on the same question. That would technically work, but it’s probably a lot more expensive than necessary. A slightly cheaper method would be to determine the “true” probability estimate by binary search: offer to bet either side at 50%; if someone takes the “yes” side, offer again at 75%; if they then take the “no” side, offer at 62.5%; continue until you have reached satisfactory precision. This is still pretty expensive.&lt;/p&gt;
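
&lt;p&gt;The binary search above is easy to sketch in code. Here &lt;code&gt;accepts_yes&lt;/code&gt; is a stand-in for offering a real bet to a counterparty, so each loop iteration represents one (potentially expensive) offer:&lt;/p&gt;

```python
def locate_probability(accepts_yes, lo=0.0, hi=1.0, precision=0.01):
    """Binary-search for a counterparty's probability estimate by offering bets.

    accepts_yes(odds) should return True if they take the "yes" side at the
    offered odds, i.e. they believe the true probability is higher than that.
    """
    while hi - lo > precision:
        mid = (lo + hi) / 2
        if accepts_yes(mid):
            lo = mid  # their estimate lies above the offered odds
        else:
            hi = mid  # their estimate lies below the offered odds
    return (lo + hi) / 2

# a counterparty who privately believes the probability is 62%:
# offers go 50% -> 75% -> 62.5% -> ... and converge near 0.62
estimate = locate_probability(lambda odds: odds < 0.62)
```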

&lt;p&gt;In theory, if you create a prediction market, people will be willing to bet lots of money whenever they think they can outperform the market. You might be able to start up an accurate prediction market by seeding it with your own predictions; then savvy newcomers will come and bet with you; then even savvier investors will come and bet with &lt;em&gt;them&lt;/em&gt;; and the predictions will get more and more accurate. I’m not sure that’s how it would work out in practice. And anyway, the biggest problem with this approach is that (in the US and the UK) prediction markets are heavily restricted because they’re considered similar to gambling. I’m not well-informed about the theory or practice of prediction markets, so there might be clever ways of incentivizing good predictions that I don’t know about.&lt;/p&gt;

&lt;p&gt;Anthony Aguirre (co-founder of &lt;a href=&quot;http://metaculus.com/&quot;&gt;Metaculus&lt;/a&gt;, a website for making predictions) proposed paying people based on their track record: people with a history of making good predictions get paid to make more predictions. This incentivizes people to establish and maintain a track record of making good predictions, even though they don’t get paid directly for accurate predictions per se.&lt;/p&gt;

&lt;p&gt;Aguirre has said that Metaculus may implement this incentive structure at some point in the future. I would be interested to see how it plays out and whether it turns out to be a useful engine for generating good predictions.&lt;/p&gt;

&lt;p&gt;One practical option, which goes back to the first idea I mentioned, is to pay a group of good forecasters like the &lt;a href=&quot;https://goodjudgment.com/&quot;&gt;Good Judgment Project&lt;/a&gt; (GJP). In theory, they don’t have a strong incentive to make good predictions, but they did win &lt;a href=&quot;https://en.wikipedia.org/wiki/Intelligence_Advanced_Research_Projects_Activity&quot;&gt;IARPA&lt;/a&gt;’s 2013 &lt;a href=&quot;https://en.wikipedia.org/wiki/Aggregative_Contingent_Estimation_(ACE)_Program&quot;&gt;forecasting contest&lt;/a&gt;, so in practice it seems to work. I haven’t looked into how exactly to get predictions from GJP, but it might be a reasonable way of converting money into knowledge.&lt;/p&gt;

&lt;p&gt;Based on my limited research, it looks like donors may be able to incentivize good predictions reasonably effectively with a consulting service like GJP, or perhaps by doing something involving prediction markets, although I’m not sure what. I still have some big open questions:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;What is the best way to get good predictions?&lt;/li&gt;
  &lt;li&gt;How much does a good prediction cost? How does the cost vary with the type of prediction? With the accuracy and precision?&lt;/li&gt;
  &lt;li&gt;How accurate can predictions be? What about relatively long-term predictions?&lt;/li&gt;
  &lt;li&gt;Assuming it’s possible to get good predictions, what are the best types of questions to ask, given the tradeoff between importance and predictability?&lt;/li&gt;
  &lt;li&gt;Is it possible to get good predictions from prediction markets, given the current state of regulations?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;em&gt;Discuss on the &lt;a href=&quot;https://forum.effectivealtruism.org/posts/a6qfuaPa8qefmA2XE/how-can-donors-incentivize-good-predictions-on-important-but&quot;&gt;Effective Altruism Forum&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;



                </description>
			</item>
		
			<item>
				<title>Should Global Poverty Donors Give Now or Later?</title>
				<pubDate>Mon, 21 Jan 2019 00:00:00 -0800</pubDate>
				<link>http://mdickens.me/2019/01/21/should_global_poverty_donors_give_now_or_later/</link>
				<guid isPermaLink="true">http://mdickens.me/2019/01/21/should_global_poverty_donors_give_now_or_later/</guid>
                <description>
                  
                  
                  
                  &lt;p&gt;&lt;strong&gt;Update 2020-01-04:&lt;/strong&gt; This essay contains a number of important mistakes. See &lt;a href=&quot;/2020/01/04/correction_giving_now_vs_later/&quot;&gt;Correction on Giving Now vs. Later.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Disclaimer: I am not an investment advisor and nothing in this essay serves as investment advice.&lt;/em&gt;&lt;/p&gt;

&lt;h1 id=&quot;introduction&quot;&gt;Introduction&lt;/h1&gt;

&lt;p&gt;Robin Hanson: &lt;a href=&quot;http://www.overcomingbias.com/2013/04/more-now-means-less-later.html&quot;&gt;If More Now, Less Later&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The rate of return on investment historically has been higher than the growth rate–or, as they say, &lt;code&gt;r &amp;gt; g&lt;/code&gt;. If you save your money to donate later, you can earn enough interest on it that you eventually have the funds to donate a greater amount. Because &lt;code&gt;r &amp;gt; g&lt;/code&gt;, you should invest your money for as long as you can before donating&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;–or so the argument goes.&lt;/p&gt;
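
&lt;p&gt;To make the &lt;code&gt;r &amp;gt; g&lt;/code&gt; logic concrete, here’s a toy calculation (the 7% and 3% figures are illustrative, not estimates from this essay):&lt;/p&gt;

```python
def relative_value_of_waiting(r, g, years):
    """Value of investing for `years` and then donating, relative to donating now.

    The investment compounds at r while the recipients' incomes (and hence
    the discount on future donations) compound at g, so only the ratio matters.
    """
    return ((1 + r) / (1 + g)) ** years

# illustrative: r = 7%, g = 3%, 20 years -> waiting more than doubles the
# discounted value of the donation
ratio = relative_value_of_waiting(0.07, 0.03, 20)
```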

&lt;p&gt;Traditionally, we’d apply a discount rate of &lt;code&gt;g&lt;/code&gt; to future donations, because that’s the rate at which people get richer and therefore the rate at which money becomes less valuable for them. But this ignores some important factors that affect how much we should discount future donations, and we can construct a much more careful estimate. This essay explores that in detail. Exactly what factors determine the investment rate of return and the discount rate on poverty alleviation? Can we gain any information about which is likely greater?&lt;/p&gt;

&lt;!-- more --&gt;

&lt;ul id=&quot;markdown-toc&quot;&gt;
  &lt;li&gt;&lt;a href=&quot;#introduction&quot; id=&quot;markdown-toc-introduction&quot;&gt;Introduction&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#basic-formulas-for-the-investment-rate-of-return-and-the-discount-rate-on-poverty-alleviation&quot; id=&quot;markdown-toc-basic-formulas-for-the-investment-rate-of-return-and-the-discount-rate-on-poverty-alleviation&quot;&gt;Basic formulas for the investment rate of return and the discount rate on poverty alleviation&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#why-this-formula-for-r&quot; id=&quot;markdown-toc-why-this-formula-for-r&quot;&gt;Why this formula for &lt;code&gt;r&lt;/code&gt;?&lt;/a&gt;        &lt;ul&gt;
          &lt;li&gt;&lt;a href=&quot;#do-asset-prices-really-grow-with-gdp&quot; id=&quot;markdown-toc-do-asset-prices-really-grow-with-gdp&quot;&gt;Do asset prices really grow with GDP?&lt;/a&gt;&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#why-apply-an-exponential-discount-rate-to-global-poverty-alleviation&quot; id=&quot;markdown-toc-why-apply-an-exponential-discount-rate-to-global-poverty-alleviation&quot;&gt;Why apply an exponential discount rate to global poverty alleviation?&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#why-is-the-discount-rate-on-cash-transfers-given-by-g--q&quot; id=&quot;markdown-toc-why-is-the-discount-rate-on-cash-transfers-given-by-g--q&quot;&gt;Why is the discount rate on cash transfers given by &lt;code&gt;g + q&lt;/code&gt;?&lt;/a&gt;        &lt;ul&gt;
          &lt;li&gt;&lt;a href=&quot;#should-we-consider-the-relative-income-growth-of-the-top-1&quot; id=&quot;markdown-toc-should-we-consider-the-relative-income-growth-of-the-top-1&quot;&gt;Should we consider the relative income growth of the top 1%?&lt;/a&gt;&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#giving-opportunities-may-disappear-faster-than-g--q&quot; id=&quot;markdown-toc-giving-opportunities-may-disappear-faster-than-g--q&quot;&gt;Giving opportunities may disappear faster than &lt;code&gt;g + q&lt;/code&gt;&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#input-variables-and-what-values-they-might-take-on&quot; id=&quot;markdown-toc-input-variables-and-what-values-they-might-take-on&quot;&gt;Input variables, and what values they might take on&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#investment-rate-of-return-excluding-α&quot; id=&quot;markdown-toc-investment-rate-of-return-excluding-α&quot;&gt;Investment rate of return, excluding α&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#how-big-is-α&quot; id=&quot;markdown-toc-how-big-is-α&quot;&gt;How big is α?&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#is-the-inequality-reduction-rate-positive-or-negative&quot; id=&quot;markdown-toc-is-the-inequality-reduction-rate-positive-or-negative&quot;&gt;Is the inequality reduction rate positive or negative?&lt;/a&gt;        &lt;ul&gt;
          &lt;li&gt;&lt;a href=&quot;#historical-evidence-on-global-income-inequality&quot; id=&quot;markdown-toc-historical-evidence-on-global-income-inequality&quot;&gt;Historical evidence on global income inequality&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#the-past-does-not-predict-the-future&quot; id=&quot;markdown-toc-the-past-does-not-predict-the-future&quot;&gt;The past does not predict the future&lt;/a&gt;&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#how-quickly-do-giving-opportunities-disappear&quot; id=&quot;markdown-toc-how-quickly-do-giving-opportunities-disappear&quot;&gt;How quickly do giving opportunities disappear?&lt;/a&gt;        &lt;ul&gt;
          &lt;li&gt;&lt;a href=&quot;#by-when-should-we-expect-the-best-interventions-to-converge-with-cash-transfers&quot; id=&quot;markdown-toc-by-when-should-we-expect-the-best-interventions-to-converge-with-cash-transfers&quot;&gt;By when should we expect the best interventions to converge with cash transfers?&lt;/a&gt;&lt;/li&gt;
          &lt;li&gt;&lt;a href=&quot;#could-giving-opportunities-get-better-over-time&quot; id=&quot;markdown-toc-could-giving-opportunities-get-better-over-time&quot;&gt;Could giving opportunities get better over time?&lt;/a&gt;&lt;/li&gt;
        &lt;/ul&gt;
      &lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#tentative-results&quot; id=&quot;markdown-toc-tentative-results&quot;&gt;Tentative Results&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#a-probabilistic-answer&quot; id=&quot;markdown-toc-a-probabilistic-answer&quot;&gt;A probabilistic answer&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#estimation-tool&quot; id=&quot;markdown-toc-estimation-tool&quot;&gt;Estimation tool&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#if-global-inequality-is-rising-that-might-mean-you-should-donate-later&quot; id=&quot;markdown-toc-if-global-inequality-is-rising-that-might-mean-you-should-donate-later&quot;&gt;If global inequality is rising, that might mean you should donate later&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#the-conclusion-depends-on-α-which-defies-attempts-to-learn-its-value&quot; id=&quot;markdown-toc-the-conclusion-depends-on-α-which-defies-attempts-to-learn-its-value&quot;&gt;The conclusion depends on α, which defies attempts to learn its value&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#should-we-disprefer-investing-in-risky-assets&quot; id=&quot;markdown-toc-should-we-disprefer-investing-in-risky-assets&quot;&gt;Should we disprefer investing in risky assets?&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#what-if-you-use-leverage&quot; id=&quot;markdown-toc-what-if-you-use-leverage&quot;&gt;What if you use leverage?&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#extra-details&quot; id=&quot;markdown-toc-extra-details&quot;&gt;Extra Details&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#qualitative-factors-may-tilt-the-scales-toward-giving-now&quot; id=&quot;markdown-toc-qualitative-factors-may-tilt-the-scales-toward-giving-now&quot;&gt;Qualitative factors may tilt the scales toward giving now&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#there-are-three-main-ways-of-giving-later-all-of-which-have-problems&quot; id=&quot;markdown-toc-there-are-three-main-ways-of-giving-later-all-of-which-have-problems&quot;&gt;There are three main ways of giving later, all of which have problems&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#investing-looks-particularly-bad-right-now-but-only-under-certain-assumptions&quot; id=&quot;markdown-toc-investing-looks-particularly-bad-right-now-but-only-under-certain-assumptions&quot;&gt;Investing looks particularly bad right now, but only under certain assumptions&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#should-you-give-later-if-you-expect-investment-returns-to-improve&quot; id=&quot;markdown-toc-should-you-give-later-if-you-expect-investment-returns-to-improve&quot;&gt;Should you give later if you expect investment returns to improve?&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#what-if-donations-produce-compounding-returns&quot; id=&quot;markdown-toc-what-if-donations-produce-compounding-returns&quot;&gt;What if donations produce compounding returns?&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#existential-risk-makes-giving-later-look-worse&quot; id=&quot;markdown-toc-existential-risk-makes-giving-later-look-worse&quot;&gt;Existential risk makes giving later look worse&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#if-the-best-global-health-interventions-are-getting-used-up-aka-d--0-why-hasnt-the-bottom-decile-experienced-greater-income-growth&quot; id=&quot;markdown-toc-if-the-best-global-health-interventions-are-getting-used-up-aka-d--0-why-hasnt-the-bottom-decile-experienced-greater-income-growth&quot;&gt;If the best global health interventions are getting used up (a.k.a. &lt;code&gt;d &amp;gt; 0&lt;/code&gt;), why hasn’t the bottom decile experienced greater income growth?&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#concluding-remarks&quot; id=&quot;markdown-toc-concluding-remarks&quot;&gt;Concluding Remarks&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#notes&quot; id=&quot;markdown-toc-notes&quot;&gt;Notes&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;What this essay is not:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;A look at giving now vs. later for any cause area other than global poverty. Other cause areas have different considerations, and including them would make this essay too long. I plan to write a future essay on giving now vs. later for existential risk.&lt;/li&gt;
  &lt;li&gt;A consideration of personal or social factors in giving now vs. later. This essay only examines the investment rate of return versus the discount rate on global poverty donations. (I would have used that last sentence as the title, but it’s too long.) For some broader considerations on giving now vs. later, see Julia Wise’s &lt;a href=&quot;http://effective-altruism.com/ea/4e/giving_now_vs_later_a_summary/&quot;&gt;Giving Now vs. Later: A Summary&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The problem under discussion–comparing the investment rate of return and the discount rate on poverty alleviation–raises a great many questions, and these questions raise their own questions, branching out into an ever-growing tree. I do not claim to address every relevant consideration; I merely consider this essay one possible take on the problem. I have considered many counterarguments to various claims this essay makes, but left out many of them for the sake of (relative) brevity.&lt;/p&gt;

&lt;h1 id=&quot;basic-formulas-for-the-investment-rate-of-return-and-the-discount-rate-on-poverty-alleviation&quot;&gt;Basic formulas for the investment rate of return and the discount rate on poverty alleviation&lt;/h1&gt;

&lt;p&gt;This section provides formulas for the investment rate of return and the discount rate on poverty alleviation, along with the reasoning for selecting those particular formulas.&lt;/p&gt;

&lt;p&gt;We can write the investment rate of return as:&lt;/p&gt;

&lt;div align=&quot;center&quot;&gt;`r = g + i + v + &amp;alpha;`&lt;/div&gt;

&lt;p&gt;Where:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;code&gt;g&lt;/code&gt; = GDP growth&lt;/li&gt;
  &lt;li&gt;&lt;code&gt;i&lt;/code&gt; = investment income (dividends, rents, etc.)&lt;/li&gt;
  &lt;li&gt;&lt;code&gt;v&lt;/code&gt; = valuation&lt;/li&gt;
  &lt;li&gt;
    &lt;div&gt;`&amp;alpha;` (alpha) = extra return on top of the market rate&lt;/div&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A formula for the discount rate on poverty alleviation:&lt;/p&gt;

&lt;div align=&quot;center&quot;&gt;`p = g + q + d`&lt;/div&gt;

&lt;p&gt;Where:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;code&gt;g&lt;/code&gt; = GDP growth&lt;/li&gt;
  &lt;li&gt;&lt;code&gt;q&lt;/code&gt; = income inequality reduction rate, a.k.a. the growth rate of the global poor relative to world GDP growth (note that this could be negative)&lt;/li&gt;
  &lt;li&gt;&lt;code&gt;d&lt;/code&gt; = rate at which the best giving opportunities dry up over time, on top of &lt;code&gt;q&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In plain English, &lt;code&gt;g + q&lt;/code&gt; represents the income growth rate of the poorest people in the world. We can break this down into world GDP growth plus the additional growth (or contraction) that the global poor experience. The discount rate on unconditional cash transfers is given by &lt;code&gt;g + q&lt;/code&gt;, for reasons discussed &lt;a href=&quot;#why-is-the-discount-rate-on-cash-transfers-given-by-g--q&quot;&gt;below&lt;/a&gt;. If we can find more cost-effective giving opportunities than cash transfers, we should discount them at rate &lt;code&gt;p = g + q + d&lt;/code&gt;.&lt;/p&gt;
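
&lt;p&gt;Since &lt;code&gt;g&lt;/code&gt; appears in both formulas, the give-now-vs.-later comparison reduces to &lt;code&gt;i + v + &amp;alpha;&lt;/code&gt; versus &lt;code&gt;q + d&lt;/code&gt;. A toy comparison (the input values below are illustrative only, not the essay’s estimates):&lt;/p&gt;

```python
def give_later_is_better(g, i, v, alpha, q, d):
    """Compare the investment return r = g + i + v + alpha against the
    discount rate on poverty alleviation p = g + q + d.

    Note that g cancels out, so only the other terms matter.
    """
    r = g + i + v + alpha
    p = g + q + d
    return r > p

# illustrative inputs: 2% investment income, neutral valuation, no alpha,
# 1% inequality reduction, opportunities drying up at 2%/year
decision = give_later_is_better(g=0.03, i=0.02, v=0.0, alpha=0.0, q=0.01, d=0.02)
```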

&lt;h2 id=&quot;why-this-formula-for-r&quot;&gt;Why this formula for &lt;code&gt;r&lt;/code&gt;?&lt;/h2&gt;

&lt;p&gt;The simple version of the formula for return on capital is &lt;code&gt;r = g + i&lt;/code&gt;. In the long run, asset prices increase at the same rate as GDP (&lt;code&gt;g&lt;/code&gt;), and the assets pay out income (&lt;code&gt;i&lt;/code&gt;), so the total return is asset price growth (=GDP growth) plus income.&lt;/p&gt;

&lt;p&gt;This must hold in the long run (as explained below), but in the short run, the market can drift above or below its intrinsic valuation. The &lt;code&gt;v&lt;/code&gt; term accounts for these fluctuations: if the market currently trades below intrinsic value then &lt;code&gt;v&lt;/code&gt; will be positive, and vice versa.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;&amp;alpha;&lt;/code&gt; allows for the donor to earn a higher return than the market rate. Some donors may reasonably expect to outperform the market and thus use a value for &lt;code&gt;&amp;alpha;&lt;/code&gt; that’s greater than zero. This possibility is discussed more in the section on &lt;code&gt;&amp;alpha;&lt;/code&gt;.&lt;/p&gt;


&lt;h3 id=&quot;do-asset-prices-really-grow-with-gdp&quot;&gt;Do asset prices really grow with GDP?&lt;/h3&gt;

&lt;p&gt;The current price of a capital-generating asset (such as a stock) represents the market’s prediction of the discounted present value of all future income for that asset. The investment income &lt;code&gt;i&lt;/code&gt; grows approximately with GDP&lt;sup id=&quot;fnref:9&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:9&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;2&lt;/a&gt;&lt;/sup&gt;, so the discounted value of all future income payments must grow at the same pace.&lt;/p&gt;

&lt;p&gt;Consider what would happen otherwise, if a stock’s dividend permanently grew faster than its price. Eventually the stock would pay out some absurdly high dividend yield like 100% (i.e., you could double your money in a year by buying the stock). If a stock’s dividend yield gets too high, investors will rush in to buy it, thus driving up the price.&lt;/p&gt;

&lt;p&gt;But that’s just an argument, and you can prove anything with arguments. So let’s use an appeal to authority instead. Warren Buffett said that the ratio of market capitalization to GDP &lt;a href=&quot;http://archive.fortune.com/magazines/fortune/fortune_archive/2001/12/10/314691/index.htm&quot;&gt;“is probably the best single measure of where valuations stand at any given moment.”&lt;/a&gt; In other words, he believes that the market’s intrinsic valuation is tied to GDP. If it goes much higher or lower then it’s probably over- or under-valued.&lt;/p&gt;

&lt;p&gt;Note that over short time periods, asset prices need not grow with GDP. They may become detached from their intrinsic value, but they will eventually mean revert. The &lt;code&gt;v&lt;/code&gt; term in the return-on-investment formula accounts for this. For example, the United States greatly deviated from intrinsic value in the 2000 dot-com bubble, so the rate of return in 2000 would have included a large negative value for &lt;code&gt;v&lt;/code&gt; (when valuations are high, future return expectations are correspondingly low).&lt;/p&gt;

&lt;h2 id=&quot;why-apply-an-exponential-discount-rate-to-global-poverty-alleviation&quot;&gt;Why apply an exponential discount rate to global poverty alleviation?&lt;/h2&gt;

&lt;p&gt;An exponential discount rate &lt;code&gt;p&lt;/code&gt; relies on the core assumption that increasing someone’s income by a certain percentage does the same amount of good no matter their absolute income level: if someone’s twice as rich, you need to give them twice as much money to produce the same increase in welfare. In other words, utility is logarithmic with income. This assumption probably isn’t strictly true, but it’s pretty close. It fits the (admittedly limited) empirical data&lt;sup id=&quot;fnref:4&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:4&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;3&lt;/a&gt;&lt;/sup&gt;, and it’s built into a lot of common assumptions we make.&lt;/p&gt;

&lt;p&gt;Consider what the world would be like if utility were not logarithmic with income.&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Suppose utility is linear with income. That means there’s no reason to donate to charity because rich people get just as much value out of each extra dollar as poor people do. In fact you should really donate your money to hedge fund managers because they can use your money to make even more money for themselves.&lt;/li&gt;
  &lt;li&gt;Or maybe, as they say, “Money cannot buy happiness.” In that case you might as well burn all your money because it won’t make you or anyone else happy.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Clearly our reality looks nothing like the linear-utility or the zero-utility world. On the other hand, people’s behavior generally makes sense if money provides logarithmic utility.&lt;/p&gt;

&lt;p&gt;If we want to get a little more into the weeds, we could argue about whether we should care about log wealth instead of log income. That’s a big question, and it probably doesn’t have much bearing on our ultimate decisions so I’m not going to get into it. For our purposes, we can consider income and wealth roughly equivalent. Similarly, I conflate income and consumption; these are sufficiently similar that conflating them is a broadly accepted practice.&lt;/p&gt;

&lt;h2 id=&quot;why-is-the-discount-rate-on-cash-transfers-given-by-g--q&quot;&gt;Why is the discount rate on cash transfers given by &lt;code&gt;g + q&lt;/code&gt;?&lt;/h2&gt;

&lt;p&gt;To start, let’s use &lt;a href=&quot;https://givedirectly.org/&quot;&gt;GiveDirectly&lt;/a&gt; as our model charity. It gives cash directly to the global poor. Some charities may do more good per dollar than GiveDirectly, and we will discuss that possibility; but probably no global poverty charity does &lt;em&gt;much&lt;/em&gt; better, so GiveDirectly serves as a reasonable baseline.&lt;/p&gt;

&lt;p&gt;For GiveDirectly, &lt;code&gt;d = 0&lt;/code&gt; in our formula &lt;code&gt;p = g + q + d&lt;/code&gt;, so for now we can simplify it to &lt;code&gt;p = g + q&lt;/code&gt;: we discount future donations to GiveDirectly according to the income growth rate of the poorest people in the world.&lt;/p&gt;

&lt;p&gt;If we assume that utility scales logarithmically with income, the good we can do with a cash transfer depends on the ratio of the size of the donation to the income of the recipient. The return on investment &lt;code&gt;r&lt;/code&gt; determines how much more you can donate in the future if you invest your money. While you’re holding your investments, the world’s poorest people get richer at rate &lt;code&gt;g + q&lt;/code&gt;: the global GDP growth plus the rate of inequality reduction (or inequality increase, if &lt;code&gt;q&lt;/code&gt; is negative). If the global poor are gaining wealth more quickly, that means your future donations won’t matter as much to them; and vice versa.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;g + q&lt;/code&gt; is what’s called the “anonymous” income growth rate. It doesn’t tell us the rate of income growth for a particular poor person; instead, it tells us the rate at which the poorest (say) 10% of people get richer, even if the poorest 10% at one time are different people from the poorest 10% at another time. For example, in the 80’s and 90’s, most of the people in the bottom decile of income lived in India and China; but today they’re mainly in sub-Saharan Africa&lt;sup id=&quot;fnref:5&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:5&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt;. That means the poorest Chinese experienced income growth greater than &lt;code&gt;g + q&lt;/code&gt;, while the poorest Africans did relatively worse.&lt;/p&gt;

&lt;p&gt;We care specifically about anonymous income growth (&lt;code&gt;g + q&lt;/code&gt;) rather than non-anonymous growth (e.g., the growth rate of the poorest people in China in particular) because giving opportunities scale with &lt;code&gt;g + q&lt;/code&gt;. Currently, GiveDirectly primarily serves people in Kenya; but if poor Kenyans move up on the income scale and thus cash transfers become less effective for them, GiveDirectly can go find the new poorest people and give cash transfers to them instead.&lt;/p&gt;

&lt;h3 id=&quot;should-we-consider-the-relative-income-growth-of-the-top-1&quot;&gt;Should we consider the relative income growth of the top 1%?&lt;/h3&gt;

&lt;p&gt;Given that most of my readers are in the global top 1%, and many are in the top 0.1%, we might think it pertinent to consider the rate of income growth among the richest people in the world: higher relative growth means we can expect the size of our donations to grow more quickly than world GDP and thus do more good in the future. But this isn’t particularly relevant. If you save money to donate later, you cannot invest that cash directly into your own income; you can only invest it in, well, investments.&lt;/p&gt;

&lt;p&gt;That said, you may at some point in your life have a choice between donating money now or spending money to benefit your career growth, in which case you do care about how quickly you expect your salary to increase. You can probably get much better information about projected salary growth by looking at statistics for your field and examining your particular situation than you can by looking at global income growth statistics. I won’t consider this in detail, but if you want, you can consider investing in personal development as a form of investment and re-calculate &lt;code&gt;r&lt;/code&gt; based on this.&lt;/p&gt;

&lt;h2 id=&quot;giving-opportunities-may-disappear-faster-than-g--q&quot;&gt;Giving opportunities may disappear faster than &lt;code&gt;g + q&lt;/code&gt;&lt;/h2&gt;

&lt;p&gt;To quote &lt;a href=&quot;http://slatestarcodex.com/2013/04/05/investment-and-inefficient-charity/&quot;&gt;Scott Alexander&lt;/a&gt;, paraphrasing Elie Hassenfeld: “in the 1960s, the most cost-effective charity was childhood vaccinations, but now so many people have donated to this cause that 80% of children are vaccinated… In the 1960s, iodizing salt might have been the highest-utility intervention, but now most of the low-iodine areas have been identified and corrected. While there is still much to be done, we have run out of interventions quite as easy and cost-effective as those. And one day, God willing, we will end malaria and maybe we will never see a charity as effective as the Against Malaria [Foundation] again.”&lt;/p&gt;

&lt;p&gt;The world’s poorest people may experience improvements to their lives more quickly than their income grows. Or it may become more difficult to find people who we can easily help. For whatever reason, cost-effective giving opportunities may disappear faster than &lt;code&gt;g + q&lt;/code&gt;, so we should apply an extra discount rate &lt;code&gt;d&lt;/code&gt;.&lt;/p&gt;

&lt;h1 id=&quot;input-variables-and-what-values-they-might-take-on&quot;&gt;Input variables, and what values they might take on&lt;/h1&gt;

&lt;p&gt;Recall our two formulas:&lt;/p&gt;

&lt;div align=&quot;center&quot;&gt;`r = g + i + v + &amp;alpha;`&lt;/div&gt;

&lt;div align=&quot;center&quot;&gt;`p = g + q + d`&lt;/div&gt;

&lt;p&gt;Let’s consider each term in these formulas and discuss what values it might reasonably take on.&lt;/p&gt;

&lt;h2 id=&quot;investment-rate-of-return-excluding-α&quot;&gt;Investment rate of return, excluding α&lt;/h2&gt;

&lt;p&gt;The global stock market has experienced a long-run real return of about 5%&lt;sup id=&quot;fnref:10&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:10&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;5&lt;/a&gt;&lt;/sup&gt;. A lot of people quote 7% as the long-run return, and it’s true that the US stock market has returned 7% over the past 100 years or so. But the United States experienced stronger growth than any other country in the past century and we &lt;a href=&quot;https://mdickens.me/2017/03/26/do_investors_put_too_much_stock_in_the_us/&quot;&gt;should not&lt;/a&gt; necessarily expect that trend to continue. So I will use the global 5% figure. This 5% can be broken down into approximately 2% GDP growth (see &lt;a href=&quot;https://data.worldbank.org/indicator/NY.GDP.MKTP.KD.ZG&quot;&gt;World Bank data&lt;/a&gt;) and 3% investment income.&lt;/p&gt;

&lt;p&gt;Over time periods of about a decade, the market predictably deviates from the long-run return. The &lt;a href=&quot;https://en.wikipedia.org/wiki/Dividend_discount_model&quot;&gt;dividend discount model&lt;/a&gt; can be used to predict future market returns. Although the market fluctuates a lot from year to year for various reasons, the dividend discount model has reasonably strong predictive power over a 10-year timespan. Valuation metrics such as the 10-year CAPE ratio have &lt;a href=&quot;http://mebfaber.com/wp-content/uploads/2016/05/SSRN-id2129474.pdf&quot;&gt;predictive power&lt;/a&gt; as well. We can use these 10-year projections to decide whether to give now or later and then re-assess the projections every few years and change our policy accordingly.&lt;/p&gt;

&lt;p&gt;The financial firm Research Affiliates produces &lt;a href=&quot;https://interactive.researchaffiliates.com/asset-allocation&quot;&gt;expected 10-year returns&lt;/a&gt; for a variety of asset classes using similar methodology to what I described. They predict future stock returns as a combination of capital growth (same as my &lt;code&gt;g&lt;/code&gt;), average net yield (&lt;code&gt;i&lt;/code&gt;), and valuation change (&lt;code&gt;v&lt;/code&gt;). As of when I’m writing this, Research Affiliates predicts &lt;code&gt;g = 1.3%&lt;/code&gt;, &lt;code&gt;i = 2.4%&lt;/code&gt;, and &lt;code&gt;v = -1.2%&lt;/code&gt; for an aggregate real return of &lt;code&gt;r = 2.6%&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Research Affiliates gives some assets a better 10-year expected return than stocks. &lt;a href=&quot;https://www.msci.com/eafe&quot;&gt;EAFE&lt;/a&gt; (that is, the developed-country markets in Europe, Australia, and the Far East) has a projected 10-year average real return of 4.7%, and the firm predicts emerging markets to return 6.8%. You could certainly argue that potential donors ought to invest their funds in emerging markets to get higher returns (I personally put a sizable portion of my investments into emerging markets). But we can take the world stock market as the default investment.&lt;/p&gt;

&lt;h2 id=&quot;how-big-is-α&quot;&gt;How big is α?&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;&amp;alpha;&lt;/code&gt; tells us the margin by which we can expect to outperform the market. Conservatively, we can assume that &lt;code&gt;&amp;alpha; = 0&lt;/code&gt;, but it might plausibly be greater than 0. How much greater?&lt;/p&gt;

&lt;p&gt;Thomas Piketty claims&lt;sup id=&quot;fnref:11&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:11&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;6&lt;/a&gt;&lt;/sup&gt; that the richest 0.1% or so can earn 2-3 percentage points better returns than the market, primarily because they have access to more investment opportunities and they can hire the best money managers. As a baseline, wealthy, sophisticated investors can perhaps expect to earn an extra 2-3% return.&lt;/p&gt;

&lt;p&gt;Financial research from Meb Faber&lt;sup id=&quot;fnref:10:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:10&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;5&lt;/a&gt;&lt;/sup&gt; as well as from Gray, Vogel, &amp;amp; Foulke&lt;sup id=&quot;fnref:12&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:12&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;7&lt;/a&gt;&lt;/sup&gt; show that historically, investors could have returned 4-6% alpha on top of the market return (after expenses). Market-beating strategies tend to disappear as they are discovered, but Gray et al. present reasons why they expect this not to happen (see also &lt;a href=&quot;https://www.youtube.com/watch?v=rVjlXzWwxXk&amp;amp;t=21m18s&quot;&gt;this talk by Wesley Gray&lt;/a&gt; (the relevant portion starts at 21:18)).&lt;/p&gt;

&lt;p&gt;My own independent backtests (unpublished, based on CRSP/Compustat price and fundamentals data) suggest that small investors, by following strategies outlined by Gray et al. but buying smaller stocks, historically could have increased their alpha to about 8-10% after expenses. (Large investors cannot take advantage of this because they can’t put too much money into small stocks without moving the market.) Going forward, I expect these strategies to work less well, although probably still better than the broad market, for reasons discussed by Gray.&lt;/p&gt;

&lt;p&gt;If you can identify the smartest people in the room (such as Warren Buffett in the 50’s, or &lt;a href=&quot;https://en.wikipedia.org/wiki/James_Harris_Simons&quot;&gt;James Simons&lt;/a&gt; or &lt;a href=&quot;https://en.wikipedia.org/wiki/Joel_Greenblatt&quot;&gt;Joel Greenblatt&lt;/a&gt; in the 90’s) and get them to invest your money, you may be able to achieve even better returns. The tricky part is identifying those people in advance and being in a position where they are willing to take your money. It’s probably safe to assume that you can’t do that, but if you can, then that’s a strong consideration in favor of saving your money to donate later. Warren Buffett is almost certainly doing more good by donating $80 billion today than if he had donated a few million dollars in the 60’s.&lt;/p&gt;

&lt;h2 id=&quot;is-the-inequality-reduction-rate-positive-or-negative&quot;&gt;Is the inequality reduction rate positive or negative?&lt;/h2&gt;

&lt;p&gt;In our formula for global poverty alleviation, &lt;code&gt;p = g + q&lt;/code&gt;, the &lt;code&gt;q&lt;/code&gt; represents the inequality reduction rate: the rate at which the poorest people’s income grows faster than world GDP (&lt;code&gt;g&lt;/code&gt;)&lt;sup id=&quot;fnref:8&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:8&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;8&lt;/a&gt;&lt;/sup&gt;. (Note that &lt;code&gt;q&lt;/code&gt; could be negative, i.e., global income inequality could be rising.) What is the value of &lt;code&gt;q&lt;/code&gt;?&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://www.givewell.org/charities/give-directly&quot;&gt;According to GiveWell&lt;/a&gt;, GiveDirectly recipients on average spend about $0.60 per day in nominal terms&lt;sup id=&quot;fnref:14&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:14&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;9&lt;/a&gt;&lt;/sup&gt;, which corresponds to about $1.20 in purchasing power (see &lt;a href=&quot;https://data.worldbank.org/indicator/PA.NUS.PPPC.RF?view=chart&quot;&gt;World Bank PPP data&lt;/a&gt;). This puts GiveDirectly recipients in the bottom 10% of the world by income, and sometimes but not always in the bottom 5%. So we want to know what income growth looks like for the bottom 5-10% and compare it to growth for the top 1%.&lt;/p&gt;

&lt;h3 id=&quot;historical-evidence-on-global-income-inequality&quot;&gt;Historical evidence on global income inequality&lt;/h3&gt;

&lt;p&gt;We can look at historical economic growth among the global poor to see how global income inequality has changed over time. Some sources, discussed below, provide data going back about 200 years, but only cover developed countries. We have more modern data covering a larger global population, including those in extreme poverty, but unfortunately it only goes back to about 1990. Let’s see what we can find out, anyway.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://www.brookings.edu/wp-content/uploads/2018/04/workingpaper114-elephantchartrevisited.pdf&quot;&gt;Kharas &amp;amp; Seidel (2018)&lt;/a&gt;&lt;sup id=&quot;fnref:5:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:5&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;4&lt;/a&gt;&lt;/sup&gt; examine survey data from 1993–2013 and look at the income growth for each global income percentile. They present their data, compared against income tax data from the World Wealth and Income Database (WID), in this graph, taken from page 12 of the report:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/assets/images/brookings-elephant.png&quot; width=&quot;700px&quot; /&gt;&lt;/p&gt;

&lt;p&gt;I include the graph with both data sources because Kharas &amp;amp; Seidel emphasize how each data source (surveys vs. tax records) has its own biases, and neither should be taken literally. Kharas &amp;amp; Seidel also discuss a number of possible methodological alterations that change how the income growth graph looks.&lt;/p&gt;

&lt;p&gt;Caveats aside, these data suggest that the world’s poorest decile experienced income growth at about the same rate as global economic growth, or maybe a little lower. This implies a &lt;code&gt;q&lt;/code&gt; roughly between 0% and -0.5% based on Kharas &amp;amp; Seidel and World Bank data.&lt;/p&gt;

&lt;p&gt;As I mentioned previously, most of the income growth in the past few decades among the world’s poorest happened in China and India, with sub-Saharan Africa getting left behind. Perhaps the global extreme poor will experience worse growth over the next 20-30 years now that the people in the rapidly-growing Asian countries have moved out of the bottom decile.&lt;/p&gt;

&lt;p&gt;The aforementioned &lt;a href=&quot;https://wid.world/&quot;&gt;World Wealth and Income Database&lt;/a&gt; presents &lt;a href=&quot;https://wid.world/data/&quot;&gt;data&lt;/a&gt; on a wide variety of metrics. We can use their website to look at income growth for sub-Saharan Africa, the poorest region in the world, from 1980 to 2016. The whole region experienced an annual 0.9% increase in PPP-adjusted national income per capita, while the poorest 50% (which roughly corresponds to the bottom decile globally) saw 1.6% growth over the period. Or if we look at 1993-2013 to allow a direct comparison to the Kharas &amp;amp; Seidel data, we see 1.7% overall growth as well as 2.4% growth for the poorest 50%. Comparing the poorest 50%’s growth against world GDP growth over the same periods, these results suggest a historical &lt;code&gt;q&lt;/code&gt; of around -0.5% to -1% for sub-Saharan Africa.&lt;/p&gt;

&lt;h3 id=&quot;the-past-does-not-predict-the-future&quot;&gt;The past does not predict the future&lt;/h3&gt;

&lt;p&gt;In &lt;em&gt;Capital in the Twenty-First Century&lt;/em&gt;&lt;sup id=&quot;fnref:11:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:11&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;6&lt;/a&gt;&lt;/sup&gt;, Thomas Piketty presents some data on longer term trends on income growth going back about 200 years. These data cannot tell us much about extreme poverty because they only cover the most developed countries, but we can draw one key lesson. Income inequality increased from about 1870 to 1910, then decreased 1910 to 1950, and has been increasing again since then. This shows that income inequality can go through long periods of rising or falling, so the past 25 years in which we’ve had good global data probably don’t tell us much about what we can expect going forward.&lt;/p&gt;

&lt;p&gt;That said, my best guess is that the recent historical &lt;code&gt;q&lt;/code&gt; of around 0% to -1% will continue over the next few decades. Given the amount of research done in this area by development economists, we could probably develop a better estimate of &lt;code&gt;q&lt;/code&gt;, but this level of inquiry suffices for now.&lt;/p&gt;

&lt;h2 id=&quot;how-quickly-do-giving-opportunities-disappear&quot;&gt;How quickly do giving opportunities disappear?&lt;/h2&gt;

&lt;p&gt;We discussed above why &lt;code&gt;g + q&lt;/code&gt; determines the discount rate on cash transfers. We can always continue giving poor people money (at least in theory), so the best interventions must always be at least as good as cash transfers. This means cash transfers get less cost-effective at the rate &lt;code&gt;g + q&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;But some interventions may do more good per dollar than direct cash transfers, such as the best global health interventions. (GiveWell &lt;a href=&quot;https://www.givewell.org/how-we-work/our-criteria/cost-effectiveness/cost-effectiveness-models&quot;&gt;thinks so&lt;/a&gt;.) In that case, even if cash transfers can absorb arbitrary amounts of funding, the better interventions might still dry up over time. If these better interventions do exist and they’ve been disappearing, that raises the question of why we don’t see this in global income statistics: why isn’t the bottom decile getting richer much faster than the global median?&lt;/p&gt;

&lt;p&gt;There exist three possibilities:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;The best global health interventions do not actually do much more good than cash transfers.&lt;/li&gt;
  &lt;li&gt;The best global health interventions increase people’s income more so than cash transfers, but they aren’t well-funded enough to show up in aggregate global income trends.&lt;/li&gt;
  &lt;li&gt;Global health interventions do more good than cash transfers, but don’t produce a substantial increase in income.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In case 1, direct cash transfers do about as much good as anything else, so how much good we can do depends on how much poorer the poorest people are relative to us, the donors. In other words, giving opportunities dry up at about rate &lt;code&gt;q&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Similarly, in case 2, the best giving opportunities are not being substantially funded, so giving opportunities will not disappear much quicker than the rate of income inequality reduction.&lt;/p&gt;

&lt;p&gt;That leaves case 3, in which the best interventions actually do beat cash transfers, but don’t show up in &lt;code&gt;q&lt;/code&gt;. In this case, top global health interventions will receive more and more funding over time&lt;sup id=&quot;fnref:6&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:6&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;10&lt;/a&gt;&lt;/sup&gt;, until their cost-effectiveness converges with cash transfers’. How quickly this happens depends on (1) the current differential between cash transfers and the top intervention and (2) the time until convergence.&lt;/p&gt;

&lt;p&gt;Based on GiveWell’s &lt;a href=&quot;https://docs.google.com/spreadsheets/d/1XIBtD1aOKKkVKl9eFB_L5OF9zobONtmbg4bIXvKI86k/edit#gid=1034883018&quot;&gt;2018 cost-effectiveness analysis&lt;/a&gt;, it’s reasonable to assume that the current best global health interventions are about 5 times more cost-effective than cash transfers&lt;sup id=&quot;fnref:7&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:7&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;11&lt;/a&gt;&lt;/sup&gt;. If better-than-cash-transfer interventions take a given number of years to dry up, what marginal discount rate &lt;code&gt;d&lt;/code&gt; should we apply on top of rate &lt;code&gt;p&lt;/code&gt;?&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;years until convergence&lt;/th&gt;
      &lt;th&gt;discount rate &lt;code&gt;d&lt;/code&gt;&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;20&lt;/td&gt;
      &lt;td&gt;8.4%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;50&lt;/td&gt;
      &lt;td&gt;3.3%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;100&lt;/td&gt;
      &lt;td&gt;1.6%&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;150&lt;/td&gt;
      &lt;td&gt;1.1%&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Or calculate for your own inputs:&lt;/p&gt;

&lt;script type=&quot;text/javascript&quot;&gt;
function calcDiscount() {
    // Read the cost-effectiveness multiple and the years until convergence.
    var factor = parseFloat(document.healthDiscounter.factor.value);
    var years = parseFloat(document.healthDiscounter.years.value);
    // The advantage decays to 1x over `years`, so the annual rate is factor^(1/years) - 1.
    var rate = (Math.pow(factor, 1.0 / years) - 1) * 100;
    document.getElementById(&quot;rate&quot;).innerHTML = rate.toFixed(1) + &quot;%&quot;;
}
&lt;/script&gt;

&lt;form name=&quot;healthDiscounter&quot;&gt;
&lt;table&gt;
  &lt;tr&gt;&lt;td align=&quot;right&quot;&gt;cost-effectiveness of top intervention over cash transfers&lt;/td&gt; &lt;td&gt;&lt;input type=&quot;number&quot; name=&quot;factor&quot; value=&quot;5&quot; /&gt;&lt;/td&gt;&lt;/tr&gt;
  &lt;tr&gt;&lt;td align=&quot;right&quot;&gt;years until convergence&lt;/td&gt; &lt;td&gt;&lt;input type=&quot;number&quot; name=&quot;years&quot; value=&quot;50&quot; /&gt;&lt;/td&gt;&lt;/tr&gt;
  &lt;tr&gt;&lt;td align=&quot;right&quot;&gt;discount rate&lt;/td&gt; &lt;td&gt;&lt;strong&gt;&lt;div id=&quot;rate&quot;&gt;&lt;/div&gt;&lt;/strong&gt;&lt;/td&gt;&lt;/tr&gt;
  &lt;tr&gt;&lt;td&gt;&lt;input type=&quot;button&quot; class=&quot;button&quot; name=&quot;button&quot; value=&quot;Calculate&quot; onclick=&quot;calcDiscount()&quot; /&gt;&lt;/td&gt; &lt;td&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;
&lt;/form&gt;

&lt;p&gt;(Clearly I don’t have great design skills but it gets the job done.)&lt;/p&gt;

&lt;p&gt;Even over somewhat long time horizons (&amp;gt;50 years), this rate non-trivially increases the value of &lt;code&gt;p&lt;/code&gt;. Even if giving later beats giving now for GiveDirectly, it is plausible that we should prefer to give now to GiveWell’s other top charities that have higher estimated cost-effectiveness.&lt;/p&gt;

&lt;h3 id=&quot;by-when-should-we-expect-the-best-interventions-to-converge-with-cash-transfers&quot;&gt;By when should we expect the best interventions to converge with cash transfers?&lt;/h3&gt;

&lt;p&gt;I don’t have anything definitive to say about this, and more research would likely prove fruitful. That said, I have some observations, as follows.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;The industrial revolution began over 200 years ago, and we still have not come close to eradicating global poverty, so it would not be surprising if it took until 2100 or longer to end poverty.&lt;/li&gt;
  &lt;li&gt;The number of people living in extreme poverty has roughly halved since 1990 (see &lt;a href=&quot;https://ourworldindata.org/extreme-poverty&quot;&gt;Global Extreme Poverty&lt;/a&gt; by Our World in Data, which contains lots of well-presented data). That said:
    &lt;ul&gt;
      &lt;li&gt;The international poverty line is an income of $1.90 per day. Someone making substantially more than that could still easily be considered poor.&lt;/li&gt;
      &lt;li&gt;For a full picture of poverty, we should look at more than just income. The Global Extreme Poverty report above includes a section on &lt;a href=&quot;https://ourworldindata.org/extreme-poverty#poverty-across-multiple-dimensions&quot;&gt;multidimensional poverty&lt;/a&gt;. It would be a relatively straightforward matter to examine data on the multidimensional poverty index (or similar metrics) to better estimate how poverty has been changing over time.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;Effective giving opportunities may disappear more quickly than the poorest people’s lives get better, because there are many people living in poverty who nonetheless cannot easily be helped.&lt;/li&gt;
  &lt;li&gt;Looking at specific metrics may overestimate the pace of poverty reduction. For &lt;a href=&quot;https://en.wikipedia.org/wiki/Goodhart%27s_law&quot;&gt;Goodhart’s Law&lt;/a&gt; reasons, particular well-known indicators of well-being, such as child mortality rates, may show dramatic improvement over the course of a few decades while other unmeasured but important problems remain.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;could-giving-opportunities-get-better-over-time&quot;&gt;Could giving opportunities get better over time?&lt;/h3&gt;

&lt;p&gt;(In other words, could &lt;code&gt;d&lt;/code&gt; be negative?) Perhaps in the future, we will discover new ways of doing good that work better than our current best efforts. That’s certainly conceivable. I don’t consider it a strong possibility, but I haven’t seriously considered the arguments in favor. I do believe we have a fair chance of improving on the current opportunities in smaller cause areas such as farm animal welfare, but global poverty is already well-funded and well-researched, so my best guess is that we have already found the best interventions.&lt;/p&gt;

&lt;h1 id=&quot;tentative-results&quot;&gt;Tentative Results&lt;/h1&gt;

&lt;h2 id=&quot;a-probabilistic-answer&quot;&gt;A probabilistic answer&lt;/h2&gt;

&lt;p&gt;A basic quantitative model would take point estimates for each of its inputs and produce a single answer. One could then adjust the point estimates to see a range of plausible answers.&lt;/p&gt;

&lt;p&gt;For the problem under discussion, we can get a better picture by using a probabilistic model. Assume that each input follows a normal distribution (probably a reasonable simplifying assumption&lt;sup id=&quot;fnref:13&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:13&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;12&lt;/a&gt;&lt;/sup&gt;), and provide a mean and standard deviation to define the distribution. Use these inputs to calculate probability distributions for &lt;code&gt;r&lt;/code&gt; and &lt;code&gt;p&lt;/code&gt;, and then calculate the probability that &lt;code&gt;r &amp;gt; p&lt;/code&gt;, or the probability distribution of the difference &lt;code&gt;r - p&lt;/code&gt;. If &lt;code&gt;r - p &amp;gt; 0&lt;/code&gt; that means the model says you should give later, and &lt;code&gt;r - p &amp;lt; 0&lt;/code&gt; means give now. Similarly, &lt;code&gt;P(r &amp;gt; p)&lt;/code&gt; indicates the probability that giving later beats giving now.&lt;/p&gt;

&lt;p&gt;(The math works out nicely because the sum or difference of two independent normal distributions is itself a normal distribution: its mean is the sum or difference (respectively) of the two original means, and its variance is the sum of the two original variances.)&lt;/p&gt;
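&lt;p&gt;For concreteness, here is a minimal sketch of the kind of calculation the estimation tool performs. The input values below are placeholders for illustration, not my actual estimates; I also treat the inputs as independent, and note that &lt;code&gt;g&lt;/code&gt; appears in both &lt;code&gt;r&lt;/code&gt; and &lt;code&gt;p&lt;/code&gt;, so it cancels out of the difference:&lt;/p&gt;

```python
import math

def normal_cdf(x, mean, stdev):
    # CDF of a normal distribution, evaluated via the error function.
    return 0.5 * (1 + math.erf((x - mean) / (stdev * math.sqrt(2))))

# Each input is (mean, stdev). These numbers are placeholders for illustration.
i, v, a = (0.02, 0.01), (0.00, 0.01), (0.04, 0.02)  # investment income, valuation, alpha
q, d = (-0.005, 0.005), (0.033, 0.02)               # inequality reduction, extra discount

# r - p = (g + i + v + a) - (g + q + d); g cancels, so it drops out of the difference.
diff_mean = i[0] + v[0] + a[0] - q[0] - d[0]
diff_stdev = math.sqrt(i[1]**2 + v[1]**2 + a[1]**2 + q[1]**2 + d[1]**2)

# P(r - p is positive), i.e. the probability that giving later beats giving now.
p_give_later = 1 - normal_cdf(0, diff_mean, diff_stdev)
print(round(diff_mean, 4), round(p_give_later, 3))
```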

&lt;h2 id=&quot;estimation-tool&quot;&gt;Estimation tool&lt;/h2&gt;

&lt;p&gt;I included some default values based on my rough best guesses at the time of this writing, but left the input fields blank to avoid anchoring. Feel free to change the inputs to fit your own best guesses. Press “Show defaults” to populate the inputs with my best guesses, and “Clear” to clear all inputs.&lt;/p&gt;

&lt;div style=&quot;margin-left: 30%&quot;&gt;`r = g + i + v + &amp;alpha;`&lt;/div&gt;
&lt;div style=&quot;margin-left: 30%&quot;&gt;`p = g + q + d`&lt;/div&gt;

&lt;!-- in a separate file so the code doesn&apos;t clutter the essay --&gt;
&lt;script src=&quot;/scripts/giving_now_later.js&quot;&gt;
&lt;/script&gt;

&lt;form name=&quot;nowOrLater&quot;&gt;
&lt;input type=&quot;button&quot; class=&quot;button&quot; value=&quot;Show defaults&quot; onclick=&quot;showDefaults()&quot; /&gt;
&lt;input type=&quot;button&quot; class=&quot;button&quot; value=&quot;Clear&quot; onclick=&quot;clearInputs()&quot; /&gt;
&lt;table&gt;
  &lt;tr&gt;&lt;th&gt;Description&lt;/th&gt;&lt;th&gt;Symbol&lt;/th&gt;&lt;th&gt;Mean&lt;/th&gt;&lt;th&gt;Stdev&lt;/th&gt;&lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;growth&lt;/td&gt;
    &lt;td&gt;g&lt;/td&gt;
    &lt;td&gt;&lt;input type=&quot;number&quot; step=&quot;0.01&quot; id=&quot;gmean&quot; /&gt;&lt;/td&gt;
    &lt;td&gt;&lt;input type=&quot;number&quot; step=&quot;0.01&quot; id=&quot;gstdev&quot; /&gt;&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;investment income&lt;/td&gt;
    &lt;td&gt;i&lt;/td&gt;
    &lt;td&gt;&lt;input type=&quot;number&quot; step=&quot;0.01&quot; id=&quot;imean&quot; /&gt;&lt;/td&gt;
    &lt;td&gt;&lt;input type=&quot;number&quot; step=&quot;0.01&quot; id=&quot;istdev&quot; /&gt;&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;valuation&lt;/td&gt;
    &lt;td&gt;v&lt;/td&gt;
    &lt;td&gt;&lt;input type=&quot;number&quot; step=&quot;0.01&quot; id=&quot;vmean&quot; /&gt;&lt;/td&gt;
    &lt;td&gt;&lt;input type=&quot;number&quot; step=&quot;0.01&quot; id=&quot;vstdev&quot; /&gt;&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;extra return (alpha)&lt;/td&gt;
    &lt;td&gt;a&lt;/td&gt;
    &lt;td&gt;&lt;input type=&quot;number&quot; step=&quot;0.01&quot; id=&quot;amean&quot; /&gt;&lt;/td&gt;
    &lt;td&gt;&lt;input type=&quot;number&quot; step=&quot;0.01&quot; id=&quot;astdev&quot; /&gt;&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;inequality reduction&lt;/td&gt;
    &lt;td&gt;q&lt;/td&gt;
    &lt;td&gt;&lt;input type=&quot;number&quot; step=&quot;0.01&quot; id=&quot;qmean&quot; /&gt;&lt;/td&gt;
    &lt;td&gt;&lt;input type=&quot;number&quot; step=&quot;0.01&quot; id=&quot;qstdev&quot; /&gt;&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;extra discount&lt;/td&gt;
    &lt;td&gt;d&lt;/td&gt;
    &lt;td&gt;&lt;input type=&quot;number&quot; step=&quot;0.01&quot; id=&quot;dmean&quot; /&gt;&lt;/td&gt;
    &lt;td&gt;&lt;input type=&quot;number&quot; step=&quot;0.01&quot; id=&quot;dstdev&quot; /&gt;&lt;/td&gt;
  &lt;/tr&gt;

  &lt;tr&gt;
    &lt;td&gt;return on investment&lt;/td&gt;
    &lt;td&gt;r&lt;/td&gt;
    &lt;td&gt;&lt;div id=&quot;rmean&quot;&gt;&lt;/div&gt;&lt;/td&gt;
    &lt;td&gt;&lt;div id=&quot;rstdev&quot;&gt;&lt;/div&gt;&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;discount rate&lt;/td&gt;
    &lt;td&gt;p&lt;/td&gt;
    &lt;td&gt;&lt;div id=&quot;pmean&quot;&gt;&lt;/div&gt;&lt;/td&gt;
    &lt;td&gt;&lt;div id=&quot;pstdev&quot;&gt;&lt;/div&gt;&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;difference&lt;/td&gt;
    &lt;td&gt;r - p&lt;/td&gt;
    &lt;td&gt;&lt;b&gt;&lt;div id=&quot;rpmean&quot;&gt;&lt;/div&gt;&lt;/b&gt;&lt;/td&gt;
    &lt;td&gt;&lt;div id=&quot;rpstdev&quot;&gt;&lt;/div&gt;&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;P(r &amp;gt; p)&lt;/td&gt;
    &lt;td&gt;&lt;/td&gt;
    &lt;td&gt;&lt;b&gt;&lt;div id=&quot;probRgtP&quot;&gt;&lt;/div&gt;&lt;/b&gt;&lt;/td&gt;
    &lt;td&gt;&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;&lt;input type=&quot;button&quot; class=&quot;button&quot; name=&quot;button&quot; value=&quot;Calculate&quot; onclick=&quot;calcNowOrLater()&quot; /&gt;&lt;/td&gt;
  &lt;/tr&gt;
&lt;/table&gt;
&lt;/form&gt;

&lt;ul&gt;
  &lt;li&gt;Excluding alpha, I chose the values for &lt;code&gt;g + i + v&lt;/code&gt; to fit with Research Affiliates’ estimated 2.6% return and 1.7% standard deviation on projected 10-year world stock market returns (as of August 2018).&lt;/li&gt;
  &lt;li&gt;I arrived at alpha = 4% by taking the estimated 6% alpha from the cited sources in the &lt;a href=&quot;#how-big-is-alpha&quot;&gt;section on alpha&lt;/a&gt; and adjusting for the expectation that alpha will be smaller in the future.&lt;/li&gt;
  &lt;li&gt;In the &lt;a href=&quot;#is-the-inequality-reduction-rate-positive-or-negative&quot;&gt;section on q&lt;/a&gt;, I estimated that &lt;code&gt;q&lt;/code&gt; was around 0% to -1%, so I chose -0.5% as the midpoint, with a standard deviation of 0.5% so that one standard deviation in each direction covers my estimated range. This implies a 95% probability that the true value falls between -1.5% and 0.5%, which seems reasonable to me.&lt;/li&gt;
  &lt;li&gt;&lt;code&gt;d = 3.3%&lt;/code&gt; comes from the assumptions that (1) the best interventions are currently 5x better than cash transfers and (2) they will take about 50 years to converge.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Again, this model does not give the true probability that giving later does more good than giving now. The model only considers the investment rate and the donation discount rate. We can and should consider &lt;a href=&quot;http://effective-altruism.com/ea/4e/giving_now_vs_later_a_summary/&quot;&gt;lots of other factors&lt;/a&gt; that weigh into the decision.&lt;/p&gt;

&lt;h2 id=&quot;if-global-inequality-is-rising-that-might-mean-you-should-donate-later&quot;&gt;If global inequality is rising, that might mean you should donate later&lt;/h2&gt;

&lt;p&gt;Rising income inequality is generally considered a bad thing because it works to the disadvantage of the world’s poorest people. But somewhat counterintuitively, the greater the disparity in income growth, the more likely it is that we should not do anything about it yet, because we can do more good by waiting until income inequality gets worse (and our donations therefore go further).&lt;/p&gt;

&lt;p&gt;This is not to say that we &lt;em&gt;necessarily&lt;/em&gt; should give later if inequality is rising; but the smaller the value of &lt;code&gt;q&lt;/code&gt;, the more the balance tips toward giving later.&lt;/p&gt;

&lt;h2 id=&quot;the-conclusion-depends-on-α-which-defies-attempts-to-learn-its-value&quot;&gt;The conclusion depends on α, which defies attempts to learn its value&lt;/h2&gt;

&lt;div&gt;It’s possible that a huge chunk of the value of `r` comes from `&amp;alpha;`. As hard as it is to determine the value of `g` and `i`, `&amp;alpha;` is even harder.&lt;/div&gt;

&lt;p&gt;We can estimate the historical value of alpha as I did &lt;a href=&quot;#how-big-is-alpha&quot;&gt;above&lt;/a&gt;. But &lt;a href=&quot;https://www.lesswrong.com/posts/h24JGbmweNpWZfBkM/markets-are-anti-inductive&quot;&gt;markets are anti-inductive.&lt;/a&gt; Market anomalies tend to disappear over time, which makes it harder to outperform a broad index.&lt;/p&gt;

&lt;p&gt;As discussed above, we do have some reason to expect certain market inefficiencies to persist. They probably will not work as well in the future as they did in the past, but how much less well will they work? Half as well? A quarter as well?&lt;/p&gt;

&lt;p&gt;The value premium has been well publicized at least since Ben Graham and David Dodd’s &lt;a href=&quot;https://en.wikipedia.org/wiki/Security_Analysis_(book)&quot;&gt;Security Analysis&lt;/a&gt; in 1934 and Graham’s &lt;a href=&quot;https://en.wikipedia.org/wiki/The_Intelligent_Investor&quot;&gt;The Intelligent Investor&lt;/a&gt; in 1949. But value investing has become much more popular in recent years, as seen in the growth of value mutual funds and smart beta funds. On the other hand, some evidence suggests that investors are not harvesting the value premium in general. For example, David Blitz’s &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2912287&quot;&gt;Are Exchange-Traded Funds Harvesting Factor Premiums?&lt;/a&gt; found that for every dollar in a value ETF, there’s (approximately) one dollar in an ETF following the opposite strategy.&lt;/p&gt;

&lt;p&gt;It’s plausible that the value premium (and perhaps other sources of alpha) will work just as well in the coming decades as it has in the past. Or perhaps it will work far less well. I have no idea.&lt;/p&gt;

&lt;h2 id=&quot;should-we-disprefer-investing-in-risky-assets&quot;&gt;Should we disprefer investing in risky assets?&lt;/h2&gt;

&lt;p&gt;None of the prior discussion considered the nature of investment risk. Nearly all investors care not only about their expected return but also about their risk. How much should we care about risk?&lt;/p&gt;

&lt;p&gt;If we invest to give later, we introduce the possibility that we will lose money on our investments, and end up donating &lt;em&gt;less&lt;/em&gt; than if we had given now.&lt;/p&gt;

&lt;p&gt;Altruists should generally &lt;a href=&quot;https://reducing-suffering.org/when-should-altruists-be-financially-risk-averse/&quot;&gt;not be as risk-averse&lt;/a&gt; as people investing their personal money, so we can largely ignore investment risk without losing much. That said, there is certainly room for further analysis here. Considering risk would make giving later look relatively worse, but it’s hard to say by how much.&lt;/p&gt;

&lt;h2 id=&quot;what-if-you-use-leverage&quot;&gt;What if you use leverage?&lt;/h2&gt;

&lt;p&gt;If you’re investing your money to donate, you might be willing to pursue &lt;a href=&quot;https://reducing-suffering.org/when-should-altruists-be-financially-risk-averse/&quot;&gt;riskier investments&lt;/a&gt; than you would otherwise. For this essay, I have assumed that you invest entirely in stocks and not bonds, which is already riskier than the portfolios most investors hold. But you can take on even more risk via &lt;a href=&quot;https://reducing-suffering.org/should-altruists-leverage-investments/&quot;&gt;leverage&lt;/a&gt;: borrow extra money so that you can invest it. You may be able to substantially increase &lt;code&gt;r&lt;/code&gt; by taking on leverage.&lt;/p&gt;

&lt;p&gt;This naturally raises the question of how much leverage you should use. Brian Tomasik &lt;a href=&quot;https://reducing-suffering.org/should-altruists-leverage-investments/#Optimal_leverage&quot;&gt;discusses this in some detail&lt;/a&gt;. It’s a complicated question, and unfortunately, it introduces a huge amount of uncertainty into what return on investment you might expect.&lt;/p&gt;

&lt;p&gt;Suppose instead of holding stocks with no leverage, we used 2:1 leverage. This would somewhat less than double our expected return. (Tomasik has extensive calculations and simulations on &lt;a href=&quot;https://reducing-suffering.org/should-altruists-leverage-investments/#Theoretical_leverage_performance&quot;&gt;what return we might expect&lt;/a&gt;.) For simplicity, let’s coarsely assume that 2:1 leverage multiplies our expected return by 1.5. If we previously used &lt;code&gt;r = 4%&lt;/code&gt;, now we have &lt;code&gt;r = 6%&lt;/code&gt;, which could easily tip the balance from donating to saving. If we previously used a more aggressive &lt;code&gt;r = 8%&lt;/code&gt;, leverage jumps our return up to &lt;code&gt;12%&lt;/code&gt;, which pushes strongly in the direction of giving later.&lt;/p&gt;
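&lt;p&gt;As a rough sanity check on that multiplier, here is a toy calculation. The 2% and 4% borrowing costs are my assumptions for illustration; they are not figures from Tomasik’s analysis:&lt;/p&gt;

```python
def leveraged_return(base_return, leverage, borrow_rate):
    # Multiply the base return by the leverage ratio, then subtract
    # interest paid on the borrowed portion of the portfolio.
    return leverage * base_return - (leverage - 1) * borrow_rate

print(leveraged_return(0.04, 2.0, 0.02))  # 2:1 leverage on a 4% return
print(leveraged_return(0.08, 2.0, 0.04))  # 2:1 leverage on an 8% return
```

With these assumed borrowing costs, 2:1 leverage turns 4% into 6% and 8% into 12%, consistent with the coarse 1.5x multiplier above.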

&lt;h1 id=&quot;extra-details&quot;&gt;Extra Details&lt;/h1&gt;

&lt;p&gt;There is much to consider about the investment rate vs. the discount rate, so I have tried to focus only on the most relevant details. One could add many caveats, complications, and additional assumptions to my analysis up to this point. In this part, I will address what I see as some of the most important ancillary considerations.&lt;/p&gt;

&lt;h2 id=&quot;qualitative-factors-may-tilt-the-scales-toward-giving-now&quot;&gt;Qualitative factors may tilt the scales toward giving now&lt;/h2&gt;

&lt;p&gt;Julia Wise’s &lt;a href=&quot;http://effective-altruism.com/ea/4e/giving_now_vs_later_a_summary/&quot;&gt;Giving Now vs. Later: A Summary&lt;/a&gt; lists some commonly cited arguments in each direction. My essay’s model incorporates most of the arguments for giving later, but doesn’t cover several of the arguments for giving now.&lt;/p&gt;

&lt;p&gt;Julia’s reasons to give now:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;ul&gt;
    &lt;li&gt;You may get less altruistic as you age, so if you wait you may never actually donate.&lt;/li&gt;
    &lt;li&gt;Estimates of the returns on investment may be over-optimistic.&lt;/li&gt;
    &lt;li&gt;Giving to charities that can demonstrate their effectiveness provides an incentive for charities to get better at demonstrating that they’re effective. We can’t just wait for charities to improve — it takes donations to make that happen.&lt;/li&gt;
    &lt;li&gt;Having an active culture of giving encourages other people to give, too.&lt;/li&gt;
    &lt;li&gt;Better to eliminate problems as soon as possible. E.g. if we had eliminated smallpox in 1967 instead of 1977, many people would have been spared.&lt;/li&gt;
  &lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;Julia’s reasons to give later:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;ul&gt;
    &lt;li&gt;As time passes, we’ll probably have better information about which interventions work best. Even in a few years, we may know a lot more than we do now and be able to give to better causes.&lt;/li&gt;
    &lt;li&gt;Investing money may yield more money to eventually donate.&lt;/li&gt;
    &lt;li&gt;When you’re young, you should invest in developing yourself and your career, which will let you help more later.&lt;/li&gt;
    &lt;li&gt;You can put donations in a donor-advised fund to ensure they will someday be given, even if you haven’t yet figured out where you want them to go.&lt;/li&gt;
  &lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;Naturally, the importance of each of these considerations is a matter of debate; but the model does seem somewhat biased in favor of giving later because it does a worse job of accounting for the considerations for giving now.&lt;/p&gt;

&lt;h2 id=&quot;there-are-three-main-ways-of-giving-later-all-of-which-have-problems&quot;&gt;There are three main ways of giving later, all of which have problems&lt;/h2&gt;

&lt;p&gt;If you want to save your money to donate later, you essentially have three options for how to store it:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Keep your money in a taxable account.&lt;/li&gt;
  &lt;li&gt;Give it to a donor-advised fund (DAF).&lt;/li&gt;
  &lt;li&gt;Set up a foundation.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The first option offers the most flexibility, but it has one big problem: taxes. You can deduct donations from your income up to a certain limit (in the United States, you can deduct up to 30% of your income if you’re donating appreciated assets). If you donate a lot of money all at once instead of donating every year, you will likely hit the deduction limit and won’t get any tax savings on most of your donation.&lt;/p&gt;

&lt;p&gt;You can solve this by giving your money to a donor-advised fund. A DAF is a charity (usually a subsidiary of a brokerage firm like Fidelity or Schwab) that invests your money and can make grants based on your recommendations. You can continue investing your money once it’s in a DAF, but because the DAF is itself a charity, contributions are deductible.&lt;/p&gt;

&lt;div&gt;DAFs have two problems. First, they offer limited investment options. If you can get positive alpha by managing your own investments, you lose out on that alpha by giving your money to a DAF. (Naturally, this is only a concern if you believe `&amp;alpha; &amp;gt; 0`.) On top of that, you have to pay a fee (usually about 0.6%) for the DAF to manage your money. Second, donor-advised funds have no obligation to follow your direction. There has been &lt;a href=&quot;https://www.philanthropy.com/article/Nevada-Court-Says/212899&quot;&gt;at least one instance&lt;/a&gt; where a DAF used a donor&apos;s funds for its own purposes, and the court ruled that this was permissible. I wouldn&apos;t expect a reputable firm like Fidelity to do something like that, but if you plan on keeping money in a DAF for a long time, it becomes a risk worth considering.&lt;/div&gt;

&lt;p&gt;The third option: set up a foundation to hold your money. This has high startup and overhead costs–it’s no coincidence that usually only high-net-worth people start foundations. In addition, foundations are legally required to disburse at least 5% of their funds per year, so this option isn’t so much “give later” as it is “give a little bit every year and then a lump sum later if you have any left.”&lt;/p&gt;

&lt;div&gt;It&apos;s hard to say which option to prefer. Note that after about 50 years, the 0.6% fee on a DAF will exceed any tax benefits you would have gotten (depending on your tax rate). So if you plan on holding for longer than that, you should probably keep your money in a taxable account. If you believe `&amp;alpha;` is relatively large (3-5% or more), a taxable account looks preferable to a DAF even for relatively short time horizons like a decade.&lt;/div&gt;
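&lt;p&gt;A back-of-the-envelope version of that break-even calculation (the 35% tax benefit is my assumption for illustration; your marginal rate may differ):&lt;/p&gt;

```python
import math

def daf_breakeven_years(tax_benefit, annual_fee):
    # Solve (1 + tax_benefit) * (1 - annual_fee)**n = 1 for n:
    # the year when cumulative fee drag cancels the upfront tax benefit.
    return math.log(1 + tax_benefit) / -math.log(1 - annual_fee)

print(round(daf_breakeven_years(0.35, 0.006)))  # roughly 50 years
```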

&lt;p&gt;&lt;br /&gt;&lt;/p&gt;

&lt;h2 id=&quot;investing-looks-particularly-bad-right-now-but-only-under-certain-assumptions&quot;&gt;Investing looks particularly bad right now, but only under certain assumptions&lt;/h2&gt;

&lt;p&gt;Countries’ stock market returns in the long run have tended to cluster around 5%. Unfortunately, the market outlook today looks worse. Due to relatively low yields and high valuations, &lt;a href=&quot;https://interactive.researchaffiliates.com/asset-allocation&quot;&gt;Research Affiliates projects&lt;/a&gt; a 2.6% 10-year real return on the global stock market (as of August 2018). The projection for the US market looks even worse, at 0.2%. This may point in favor of donating now, at least until return expectations improve–although not necessarily, as discussed in &lt;a href=&quot;#should-you-give-later-if-you-expect-investment-returns-to-improve&quot;&gt;a different section&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Note that the substandard global market outlook largely comes from the United States’ current high valuation, because the US currently accounts for about half the global market capitalization. Research Affiliates projects a relatively normal 5% return for global stocks excluding the US.&lt;/p&gt;

&lt;p&gt;Research Affiliates’ projections include three components: capital growth, net yield, and valuation. An efficient-markets hardliner would exclude the valuation component of these projections; doing so shifts the expected global return from 2.6% to 3.8% and the US return from 0.2% up to 3.1%.&lt;/p&gt;

&lt;p&gt;If you do believe valuation estimates tell us something useful, you can outperform the global market in expectation by seeking undervalued segments. Research Affiliates estimates a 6.8% average 10-year return for emerging markets due to low valuations. Or one could go even further with something like &lt;a href=&quot;http://cambriafunds.com/gval/&quot;&gt;GVAL&lt;/a&gt;, an ETF that buys stocks in the 25% cheapest out of the 45-ish countries that have investable public markets. Your estimate for projected future return can vary greatly based on which of these assumptions you make.&lt;/p&gt;

&lt;h2 id=&quot;should-you-give-later-if-you-expect-investment-returns-to-improve&quot;&gt;Should you give later if you expect investment returns to improve?&lt;/h2&gt;

&lt;p&gt;If you expect investment returns in the near future to underperform the donation discount rate but to improve in the future, you may prefer to save your money even though donating looks better in the short term. Consider an example. For simplicity, let’s say the discount rate &lt;code&gt;p&lt;/code&gt; is 2%, and the expected investment return &lt;code&gt;r&lt;/code&gt; is a flat 0% for the next year. But the year after that, &lt;code&gt;r&lt;/code&gt; will jump up to 10%. Suppose you have $1 available to donate each year.&lt;/p&gt;

&lt;p&gt;If you donate your money at the beginning of this year, you will donate a total of $1 (obviously). If you wait a year to donate, your discounted donation will only be worth $0.98 in today’s dollars–a 0% investment return combined with a 2% discount. So for a one-year time horizon, donating now wins.&lt;/p&gt;

&lt;p&gt;If instead you wait two years and let the money keep compounding, the situation changes. The naive strategy is to donate immediately whenever &lt;code&gt;p &amp;gt; r&lt;/code&gt; and save your money when &lt;code&gt;r &amp;gt; p&lt;/code&gt;. Under our example, that means you donate at the beginning of the first year, and then for the second year you invest your money and donate at the end. The naive strategy results in a total present-value donation of $2.06 (a first donation worth $1, and a second donation that’s discounted at 2% for two years and earns 10% return for one year).&lt;/p&gt;

&lt;p&gt;However, if you wait two years before donating anything, the present value of your donation will reach $2.11. Your donation is worth more in this case because even though your first dollar loses an extra 2% due to time discounting, you make up for it by earning 10% return in the second year.&lt;/p&gt;
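&lt;p&gt;The arithmetic in this example can be checked directly:&lt;/p&gt;

```python
p = 0.02             # discount rate
r1, r2 = 0.00, 0.10  # investment return in year 1 and year 2

# Donate the first dollar now ($1), or invest it for one year at 0%:
wait_one_year = (1 + r1) / (1 + p)

# Naive strategy: donate dollar 1 immediately, invest dollar 2 during year 2.
naive = 1.0 + (1 + r2) / (1 + p) ** 2

# Patient strategy: invest both dollars, donate everything at the end of year 2.
patient = ((1 + r1) * (1 + r2) + (1 + r2)) / (1 + p) ** 2

print(round(wait_one_year, 2), round(naive, 2), round(patient, 2))  # 0.98 2.06 2.11
```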

&lt;p&gt;The takeaway: if you expect to earn a subpar return for some years but expect the return to increase, such that your &lt;em&gt;average&lt;/em&gt; investment return will exceed the discount rate, you are better off waiting. This requires both that the investment return will increase and that you have a sufficiently long time horizon to take advantage of it. This has relevance for investors today given that the market will probably have a better average return over the next 30 years than over the next 10.&lt;/p&gt;

&lt;p&gt;Note that we cannot make the same argument in the opposite direction. If you expect investment returns to get worse at some point in the future, you should simply save your money until &lt;code&gt;r&lt;/code&gt; crosses below &lt;code&gt;p&lt;/code&gt;, at which time you should donate all your savings.&lt;/p&gt;

&lt;h2 id=&quot;what-if-donations-produce-compounding-returns&quot;&gt;What if donations produce compounding returns?&lt;/h2&gt;

&lt;p&gt;As argued by &lt;a href=&quot;https://blog.givewell.org/2007/01/06/limits-of-generosity/#comment-9&quot;&gt;Holden&lt;/a&gt; in 2007&lt;sup id=&quot;fnref:2&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;13&lt;/a&gt;&lt;/sup&gt;:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;[Y]ou get a much higher ‘return’ on charity than you get on investing. Both companies and the poor can create more value if you give them more money; the difference is that investing, where the value accrues to you, is way more popular than giving, where it doesn’t. Therefore, giving is ‘cheaper,’ and your expenditures today multiply more.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That is, some have argued that well-directed donations compound at a faster rate than investments do. Paul Christiano &lt;a href=&quot;https://rationalaltruist.com/2013/03/12/giving-now-vs-later/&quot;&gt;responds&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;Here’s the problem: if you give your money well, it might compound much faster than it would have in your bank account—but only for a while. Over time the positive effects will spread out more and more, across a broader and broader group of beneficiaries, until eventually you are just contributing to a representative basket of all human activities. Eventually, my investment will compound at the rate of world economic growth, rather than at the particularly promising interest rate I was able to originally identify.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;blockquote&gt;
  &lt;p&gt;When I leave money in my bank account, it compounds slightly (1-2%, conservatively) faster than world economic growth. It does this for years, until I decide to spend it, for example on developing world aid. At that point it will earn anomalously high returns for a while, before being spread out throughout the world and then compounding in line with world economic growth. If I donate a year later, I earn 1 extra year of market returns, and 1 less year of compounding in line with economic growth. That’s a good deal—an extra 1-2%.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;blockquote&gt;
  &lt;p&gt;[…]&lt;/p&gt;
&lt;/blockquote&gt;

&lt;blockquote&gt;
  &lt;p&gt;So if I’m considering donating to a cause like developing world aid, I shouldn’t donate sooner rather than later just because poor folks can earn higher returns than I can. I should instead look at the total multiplier I’m getting on my money, and donate if and only if I expect that multiplier is declining faster than [(market rates of return) – (economic growth)]. I don’t think we’ve found many causes for which this is plausibly the case, and I think that aid is definitely not one of them.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;As Christiano says, we only care about the compounding rate of donations if we expect that rate to decline in the future. We can think of the extra compounding as part of the value of the donation: when we donate, we produce some immediate value, plus some extra value due to compounding that eventually runs out. Sum up the total value produced (factoring in time discounting) and call that the value of your donation.&lt;/p&gt;

&lt;p&gt;For example, suppose donation A will produce $100 worth of value immediately but no compounding effects. Donation B will produce $30 of value immediately and produce compounding benefits over time, and by the time the compounding reverts to the base level growth rate, it will have produced $70 of time-discounted value. You can consider donations A and B equivalent, which means you don’t actually care if a donation has compounding effects for the purposes of giving now vs. later; all you care about is the total magnitude.&lt;/p&gt;

&lt;p&gt;What does matter is whether the compounding portion of a donation made now has a bigger impact than the compounding portion of a donation made later:&lt;/p&gt;

&lt;div align=&quot;center&quot;&gt;&lt;img src=&quot;/assets/images/donation-compounding.png&quot; width=&quot;700px&quot; /&gt;&lt;/div&gt;

&lt;p&gt;We should be able to roughly account for this when looking at how global development opportunities will change in general; I don’t believe it requires any special consideration. Simply consider the value of the donation as the whole area under the curve.&lt;/p&gt;

&lt;h2 id=&quot;existential-risk-makes-giving-later-look-worse&quot;&gt;Existential risk makes giving later look worse&lt;/h2&gt;

&lt;p&gt;If civilization has a nontrivial probability of irrecoverably collapsing in the next century, that puts a blight on potential good done later. (You can’t help the global poor if everyone is dead.) You can account for this by discounting future donations by the annual probability of global catastrophe/extinction. The calculator I wrote does not offer an input for this, but you can account for existential risk (x-risk) by increasing the mean and standard deviation of &lt;code&gt;d&lt;/code&gt; based on your estimated annual probability of extinction and confidence in your estimate.&lt;/p&gt;

&lt;p&gt;I was unsure whether to consider this a central or ancillary concern. If you believe x-risk has a nontrivial probability, you might prefer to donate to x-risk prevention efforts rather than global poverty. On the other hand, you may believe x-risks are non-negligible but not know of any tractable solutions, and thus prefer to donate to global poverty. In that case, it would be reasonable to care about the discount rate on global poverty but also apply a discount based on the probability of extinction.&lt;/p&gt;

&lt;p&gt;That said, bringing up existential risk raises other complicating considerations. A global catastrophe that falls short of extinction might actually make future donations look &lt;em&gt;better&lt;/em&gt; than present donations. For example, climate change could make things much worse for poor people living in the most affected regions of the world, while you personally might remain insulated from the effects. Your donations will go further in this future world than they will today. So you might want to add a premium rather than a discount on future donations.&lt;/p&gt;

&lt;p&gt;Given the complexity of these considerations, I left them out of the main portion of the essay; but some readers may wish to take them into account.&lt;/p&gt;

&lt;h2 id=&quot;if-the-best-global-health-interventions-are-getting-used-up-aka-d--0-why-hasnt-the-bottom-decile-experienced-greater-income-growth&quot;&gt;If the best global health interventions are getting used up (a.k.a. &lt;code&gt;d &amp;gt; 0&lt;/code&gt;), why hasn’t the bottom decile experienced greater income growth?&lt;/h2&gt;

&lt;p&gt;I don’t really understand this. I can think of a couple of possible explanations:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Maybe there aren’t very many really good health interventions–only a small percent of the global poor can be helped more cheaply than cash transfers–so the benefits they provide don’t show up in aggregate statistics.&lt;/li&gt;
  &lt;li&gt;Maybe the best health interventions improve people’s lives without affecting their income growth.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The first explanation seems unlikely because health outcomes for the global poor have improved dramatically since 1998 (and in 1998 they had improved equally dramatically over 1978). If the global poor are getting healthier, you would expect to find lots of opportunities to help them get healthier even faster.&lt;/p&gt;

&lt;p&gt;The second explanation would be somewhat surprising if true–improving people’s health outcomes should help them achieve their goals better, which should translate into higher income.&lt;/p&gt;

&lt;p&gt;I don’t find either of these explanations satisfying. Perhaps they’re both partially true. Perhaps there’s some dominant third factor that I haven’t thought of. I do not consider this question hugely important to this essay, but it’s a related puzzle. Perhaps someone who knows more than I do about global development can answer it.&lt;/p&gt;

&lt;h1 id=&quot;concluding-remarks&quot;&gt;Concluding Remarks&lt;/h1&gt;

&lt;p&gt;When I began writing this essay, I initially expected to find that giving now looked better than giving later. But according to this model, using my best estimates of the input variables, giving later beats giving now with unexpectedly high probability. That said, some qualitative factors not accounted for by the model push toward giving now. And of course, for cause areas other than global poverty, the analysis would look quite different.&lt;/p&gt;

&lt;p&gt;The main takeaway should be not so much an answer to the now-vs.-later question as a general approach: break down the investment rate and the discount rate to get a more accurate answer, along with a sense of how that answer varies with different inputs. Robin Hanson’s &lt;a href=&quot;http://www.overcomingbias.com/2013/04/more-now-means-less-later.html&quot;&gt;claim&lt;/a&gt; that &lt;code&gt;r &amp;gt; g&lt;/code&gt;, and that we should therefore give later, provides a reasonable starting point, and we can get a much better picture by examining the factors that influence the investment rate and the discount rate. This analysis certainly doesn’t include every important consideration, but I hope it adds some clarity to the question of giving now vs. later for global poverty interventions.&lt;/p&gt;
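&lt;p&gt;The approach of breaking the question down into an investment rate &lt;code&gt;r&lt;/code&gt; and a discount rate &lt;code&gt;p&lt;/code&gt; can be sketched as a toy Monte Carlo: sample both rates and estimate how often giving later (which wins exactly when &lt;code&gt;r &amp;gt; p&lt;/code&gt;) beats giving now. The normal distributions and the particular means and standard deviations below are hypothetical placeholders, not the essay’s actual estimates:&lt;/p&gt;

```python
import random

def prob_later_beats_now(r_mean, r_sd, p_mean, p_sd, trials=100_000, seed=0):
    """Monte Carlo estimate of the probability that the investment rate r
    exceeds the discount rate p, i.e. that a dollar invested and donated
    later does more good than a dollar donated now."""
    rng = random.Random(seed)
    wins = sum(rng.gauss(r_mean, r_sd) > rng.gauss(p_mean, p_sd)
               for _ in range(trials))
    return wins / trials

# Hypothetical inputs: r ~ Normal(5%, 2%), p ~ Normal(3%, 2%)
print(prob_later_beats_now(0.05, 0.02, 0.03, 0.02))
```

&lt;p&gt;Varying the hypothetical means and standard deviations gives a feel for how sensitive the now-vs.-later answer is to each input.&lt;/p&gt;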

&lt;p&gt;Thanks to Linda Neavel Dickens and Jake McKinnon for reviewing drafts of this essay.&lt;/p&gt;


&lt;h1 id=&quot;notes&quot;&gt;Notes&lt;/h1&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;In practice it’s hard to ensure that your money gets put to good use long after you’re dead. For more on this, see Hanson, &lt;a href=&quot;http://www.overcomingbias.com/2011/09/let-us-give-to-future.html&quot;&gt;Let Us Give To Future&lt;/a&gt;. &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:9&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Strictly speaking, &lt;code&gt;i&lt;/code&gt; doesn’t have to grow with GDP. Companies may choose to use some or all of their free cash flow to pay out dividends, so &lt;code&gt;i&lt;/code&gt; depends on what portion of free cash flow companies return to shareholders, as well as what portion of GDP ends up as corporate free cash flow. These portions may fluctuate somewhat over time, but they should stay within a fairly narrow range; and in the long run the ratio of shareholder yield to GDP should remain stable. &lt;a href=&quot;#fnref:9&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:4&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;GiveWell’s 2018 &lt;a href=&quot;https://www.givewell.org/how-we-work/our-criteria/cost-effectiveness/cost-effectiveness-models&quot;&gt;cost-effectiveness analysis&lt;/a&gt; assumes that utility is logarithmic with consumption. (Note that consumption and income can be considered roughly the same thing for our purposes.) See also: Betsey Stevenson &amp;amp; Justin Wolfers (2013), &lt;a href=&quot;https://web.archive.org/web/20160610202901/http://www.brookings.edu/~/media/research/files/papers/2013/04/subjective%20well%20being%20income/subjective%20well%20being%20income.pdf&quot;&gt;“Subjective Well-Being and Income: Is There Any Evidence of Satiation?”&lt;/a&gt; National Bureau of Economic Research. &lt;a href=&quot;#fnref:4&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:5&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Homi Kharas &amp;amp; Brina Seidel (2018). &lt;a href=&quot;https://www.brookings.edu/wp-content/uploads/2018/04/workingpaper114-elephantchartrevisited.pdf&quot;&gt;“What’s Happening to the World Income Distribution?: The Elephant Chart Revisited”&lt;/a&gt;. Brookings Institution. &lt;a href=&quot;#fnref:5&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:5:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:10&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Meb Faber (2015). &lt;a href=&quot;https://www.amazon.com/Global-Asset-Allocation-Survey-Strategies-ebook/dp/B00TYY3F3C&quot;&gt;Global Asset Allocation.&lt;/a&gt; Downloadable for free at https://mebfaber.com/books/ as of this writing. &lt;a href=&quot;#fnref:10&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:10:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:11&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Thomas Piketty (2013). &lt;a href=&quot;https://en.wikipedia.org/wiki/Capital_in_the_Twenty-First_Century&quot;&gt;Capital in the Twenty-First Century.&lt;/a&gt; ISBN 978-0674430006. &lt;a href=&quot;#fnref:11&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt; &lt;a href=&quot;#fnref:11:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:12&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Wesley Gray, Jack Vogel, and David Foulke (2015). &lt;a href=&quot;https://www.amazon.com/DIY-Financial-Advisor-Solution-Protect/dp/111907150X&quot;&gt;DIY Financial Advisor.&lt;/a&gt; ISBN 978-1119071501. See also &lt;a href=&quot;https://alphaarchitect.com/2017/06/06/the-value-momentum-trend-philosophy/&quot;&gt;The Value Momentum Trend Philosophy&lt;/a&gt;, a blog post by Wesley Gray on similar topics. &lt;a href=&quot;#fnref:12&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:8&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Income inequality sometimes refers to the differential between the richest and the poorest people. But for our purposes, that’s not the important comparison, because we use the investment rate of return as a more precise estimate of what rich people (i.e., us) can earn. So I define income inequality as the relative difference between the poorest people and the global average. &lt;a href=&quot;#fnref:8&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:14&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;This is specifically for recipients in Kenya. &lt;a href=&quot;#fnref:14&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:6&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Assuming there are sufficiently many effectiveness-minded donors, which appears to be the case, large donors such as government aid programs and major foundations currently do a reasonable job of identifying and funding the best global health interventions. &lt;a href=&quot;#fnref:6&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:7&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;Methodology: GiveWell presents cost-effectiveness estimates from 11 different GiveWell researchers and then takes the median for each charity. I then took the median of all the charities’ medians, excluding GiveDirectly (because that’s the baseline we are comparing to) and Against Malaria Foundation/Malaria Consortium (to avoid concerns about &lt;a href=&quot;https://mdickens.me/2016/05/16/givewell&apos;s_charity_recommendations_require_taking_an_unusual_stance_on_population_ethics/&quot;&gt;population ethics&lt;/a&gt;). Including the malaria charities changes the median from 5.1 to 5.3. &lt;a href=&quot;#fnref:7&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:13&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;What if input parameter values are not normally distributed? Possibly, the parameters have fatter-than-normal distributions or they’re skewed rather than symmetric. If parameters have fat-tailed distributions, that does not decrease the certainty of the model result, but it does increase the probability that the &lt;code&gt;|r - p|&lt;/code&gt; spread is large. &lt;br /&gt;&lt;br /&gt;If any parameters have skewed distributions, that could affect the outcome in a variety of hard-to-predict ways. More generally, if we move away from normal distributions, we need the model to specify much more information about the shape of the distribution for each parameter. I believe this provides little value given the effort required, so I have stuck with the assumption that all inputs follow normal distributions. &lt;a href=&quot;#fnref:13&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:2&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;I do not know whether Holden today still endorses his 2007 belief. &lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;
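&lt;p&gt;The fat-tails point in the last footnote can be illustrated with a small simulation: holding means and standard deviations fixed, fat-tailed inputs (here a Student’s t with 3 degrees of freedom, an arbitrary illustrative choice, not a distribution from the essay) produce large &lt;code&gt;|r - p|&lt;/code&gt; spreads more often than normal inputs. The 2% standard deviation and the 0.08 threshold are likewise hypothetical:&lt;/p&gt;

```python
import math
import random

SD = 0.02  # hypothetical standard deviation for both r and p

def tail_prob(draw, threshold, trials=200_000, seed=0):
    """Fraction of Monte Carlo draws in which the spread |r - p| exceeds
    `threshold`, with r and p drawn independently from `draw`."""
    rng = random.Random(seed)
    hits = sum(abs(draw(rng) - draw(rng)) > threshold for _ in range(trials))
    return hits / trials

def normal_draw(rng):
    return rng.gauss(0.0, SD)

def fat_tailed_draw(rng, df=3):
    # Student's t built as z / sqrt(chi2 / df), then rescaled so its
    # standard deviation matches the normal draw's (a t variate has
    # standard deviation sqrt(df / (df - 2)) when df exceeds 2).
    z = rng.gauss(0.0, 1.0)
    chi2 = rng.gammavariate(df / 2, 2.0)  # chi-square with df degrees of freedom
    t = z / math.sqrt(chi2 / df)
    return t * SD / math.sqrt(df / (df - 2))

# The fat-tailed inputs exceed the threshold noticeably more often.
print(tail_prob(normal_draw, 0.08), tail_prob(fat_tailed_draw, 0.08))
```

&lt;p&gt;Matching the standard deviations isolates the tail behavior: the heavier tails, not extra variance, drive the higher frequency of large spreads.&lt;/p&gt;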

                </description>
			</item>
		
	</channel>
</rss>
