TLDR: I basically don’t.

Ethical concerns

If you stand to make money from AI, that incentivizes you to speed up AI development, and it disincentivizes you from taking actions that might cause your investments to go down—for example, donating to nonprofits that work to prevent superintelligent AI from being built. Conflicts of interest can bias people without their conscious awareness, even when they have strong incentives to get the right answer.123 I suspect that a significant portion of AI safety people’s opposition to banning ASI stems from financial conflicts of interest.4 This has made AI safety progress harder than it would’ve been if humans were immune to this sort of motivation. I don’t want that to happen to me.

My conclusion in this post is that I’m not going to invest directly in transformative AI. But if I came to a different conclusion, I would want to have a strong theory of why I won’t be corrupted if I stand to make money off of increasing extinction risk. If I couldn’t come up with any such theory, then I wouldn’t change my investments.

When I wrote previously about investing for transformative AI, I didn’t much think about the corrupting influence of money—I was too focused on the academic question of what to do in theory. That was a mistake.

Then there’s the fact that investing in AI companies gives the companies more power. This is a problem for private companies but I’m not too concerned about it for public mega-cap companies because they already have so much access to capital. And it’s not a concern if you invest in ways that don’t directly help AI companies. For example, TAI will likely raise interest rates, which suggests that investors can profit by short-selling bonds; if we do that, interest rates will go up (by a teeny tiny bit), which hurts AI companies (by a teeny tiny bit).

Thoughts on how to avoid becoming corrupted

One’s ability to act ethically, and even think clearly, becomes corrupted when there is a direct connection between behaving rightly and losing money. If I invest directly in an AI company, then I suddenly have a strong financial incentive to believe that anything that company does is good for the world. I have an incentive to disbelieve that the company is increasing risk.

I’m less concerned about indirect investments. If I expect TAI to raise interest rates and I short bonds as a result, then there’s not much I can personally do to affect interest rates because they are such a large-scale phenomenon, and they move around for all sorts of reasons.

Possible strategies for avoiding bad incentives:

  1. Commit to donating all the money you make from investing in AI. (Ideally, make it a binding commitment somehow.)
  2. Put a cap on how much you invest in AI (say 10% of your wealth), and sell down your investments if they grow beyond that cap.
  3. Be so worried about AI-driven extinction that it overpowers your desire to make money. (I’m not sure how actionable this strategy is.)

Implementing these strategies still wouldn’t give me much confidence that I’m protected against bad motivations. I don’t know how to avoid becoming corrupted.

For more discussion on this topic, see How to (hopefully ethically) make money off of AGI, under the heading “Is any of this ethical or sanity-promoting?”

Anyway, this concern hasn’t been relevant for me because I decided not to invest directly in AI, and I made that decision before doing any serious thinking about how to avoid becoming corrupted, so I never had to figure out an answer.

Future worlds

Ethical issues aside, the rest of this post describes how I think about investing for transformative AI, and why I don’t specifically buy any AI-related investments.

Here are my (rough) credences for how the near future goes:

  • 40% chance misaligned AI kills everyone.5
  • 30% chance we don’t get TAI soon: either TAI turns out to be hard to build, or humanity collectively realizes how insane it is to rush to TAI and decides to slow down.
  • 20% chance AI renders money meaningless—because it ushers in a post-scarcity utopia, or because the creator of the first superhuman AI uses it to confiscate all wealth, or something like that.
  • 5% chance we get transformative AI, money still matters, TAI boosts the economy across the board, and ~all stocks go to the moon.
  • 5% chance we get TAI, money still matters, and gains are concentrated in certain sectors of the economy.

Investing for TAI matters most in the last scenario, where money still matters but returns are concentrated.6 That’s only 5% of worlds. If AI kills everyone or renders money meaningless, investing strategy doesn’t matter after that point—but it might matter before.7

What happens in the lead-up to ASI?

Money still has value when TAI is getting close, but hasn’t arrived yet. Making more money pre-TAI means you can donate more.

The time from “AI has huge economic impacts” to “AI is powerful enough to kill everyone” may be short—I’d guess maybe a 75% chance that it’s less than five years, and 60% chance that it’s less than three years.8 It’s hard to spend money well in such a short time because four things need to happen in sequence:

Your investments go up

→ You donate your earnings

→ That money gets spent on useful activities

→ Those activities reduce existential risk

If you get rich three years before ASI arrives, it might already be too late for your money to make a difference on x-risk.

If you’re making active bets to beat the market, then the market has to anticipate TAI before it’s too late, but after you do. (Maybe investing for TAI five years ago would’ve worked, but maybe now it’s priced in.9)

You can only make money pre-TAI if the market comes to agree with you, which it might not. If you have a high P(doom), that implies a high interest rate because money is more valuable now than later. But that doesn’t mean the market interest rate will go up, because the market might not expect AI to kill everyone until it does. For longer arguments to this effect, see this comment and this sub-comment by Eliezer Yudkowsky.

So there’s a better chance that money matters close to TAI than after TAI, but it still seems hard to both make money and use money before TAI arrives. There’s a direct tradeoff: the closer you get to TAI, the more likely it is that the market has caught up to your expectations, and thus the more likely it is that you’ve made a lot of money; but also, the more difficult it is to use that money well.10

Predictions are hard, especially about markets

Some people have made a lot of money from investing in AI-related stocks. For example, Peter McCluskey is a trader who’s written about his AI investments; I think he knows what he’s doing.

I don’t trust my ability to beat the market on specific predictions like that. It’s hard to out-predict the market—very few people succeed in the long run. Even if you have a track record of successful predictions, those predictions mean less than you might think.

(If you make 100 bets on AI stocks and 90 of them beat the market, at first glance that looks like really strong evidence. But it’s not, because all of those bets are correlated.)
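To make the correlation point concrete, here’s a toy simulation (all parameters are my own illustrative choices, not from this post): 100 bets with zero true edge, driven mostly by one common factor such as “AI stocks went up,” win 90+ times together quite often, whereas 100 independent zero-edge bets essentially never do.

```python
import math
import random

def simulate(n_trials=5000, n_bets=100, rho=0.9, seed=0):
    """Fraction of trials in which >= 90 of n_bets zero-edge bets win,
    where each bet's outcome loads on a shared factor with weight rho."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        z = rng.gauss(0, 1)  # common factor shared by all bets
        wins = sum(
            math.sqrt(rho) * z + math.sqrt(1 - rho) * rng.gauss(0, 1) > 0
            for _ in range(n_bets)
        )
        hits += wins >= 90
    return hits / n_trials

print(simulate(rho=0.9))  # highly correlated bets: 90+ wins happens often
print(simulate(rho=0.0))  # independent bets: 90+ wins essentially never happens
```

Under high correlation, the 100 bets behave like roughly one bet, so 90 wins is weak evidence of skill; under independence, 90 wins would be overwhelming evidence.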

Five or ten years ago, some people in the AI safety space were talking about investing in AI. Some of them made a lot of money. But were they right, or were they lucky? I’m not confident either way. My main reason for doubt is that most of the time, when I see people writing about how AI advances will make AI stocks go up, their described mental models look simplistic to me.

For example, a few years ago, some people predicted that AI advances would cause Nvidia’s stock to go up. That prediction came true. But what if Nvidia’s margins had gotten squeezed? What if AMD or Intel had caught up to Nvidia and out-competed it? What if AI developers had replaced Nvidia with their own in-house chips? I never saw anyone give a good explanation for why those things wouldn’t happen.

(I referenced Peter McCluskey as a positive example because I know he thinks about those sorts of possibilities.)

Or: For OpenAI to meet its 5-year revenue projections, it needs to compound at 108% per year. How many times in history has a large-cap company sustained a growth rate that high?

Answer: Zero.11

Not to say it’s impossible. OpenAI reached 100 million users faster than any company ever had before. But predicting OpenAI to meet its revenue goals is a strong claim that demands a strong justification, and I’m not satisfied with the justifications I’ve seen.
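For a sense of scale, the compounding arithmetic is simple: growing 108% per year means multiplying revenue by 2.08 each year, which over five years works out to roughly a 39x increase.

```python
annual_growth = 1.08               # 108% growth per year
five_year_factor = (1 + annual_growth) ** 5
print(f"{five_year_factor:.1f}x")  # ~38.9x revenue in five years
```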

For more on this subject, see Value Investing in the Age of AGI, particularly under the heading Defenses of value investing.

I haven’t done careful fundamental analysis on AI-related investments, and I’d be wary of making those investments without doing so first. So I don’t try to beat the market by making specific predictions. I do try to beat the market, but rather than relying on personal judgment, I systematically invest in factors where there is reliable evidence that they have had positive returns. I’m relying on my ability to assess the scientific literature, but not on my ability to predict individual trades.

Trend-following

Trend-following investing12 is one of those factors I invest in. It worked well historically, and there’s reason to expect it to continue to work13. The basic idea of trend-following is that you buy assets that are trending up and short assets that are trending down.

(There’s also trend-following’s cousin, the momentum factor; everything I say in this section about trend-following also applies to momentum.)
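As a minimal sketch of the basic rule (the 12-period lookback and the sign-of-trailing-return signal are common choices in the trend-following literature, not something specified in this post):

```python
def trend_signal(prices, lookback=12):
    """Time-series momentum: +1 (long) if the trailing return over
    `lookback` periods is positive, -1 (short) otherwise.
    `prices` is a list of periodic closing prices, oldest first."""
    if len(prices) < lookback + 1:
        raise ValueError("not enough price history")
    trailing_return = prices[-1] / prices[-1 - lookback] - 1
    return 1 if trailing_return > 0 else -1

# Asset trending up over the past 12 periods -> go long
print(trend_signal([100 + i for i in range(13)]))   # 1
# Asset trending down over the past 12 periods -> go short
print(trend_signal([100 - i for i in range(13)]))   # -1
```

Real implementations layer volatility targeting and transaction-cost controls on top of a signal like this, but the core rule really is this simple.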

There’s no definitive explanation for why trend-following works, but one popular hypothesis is that markets react too slowly to new information.14 Trend-followers enter positions after the smart money, but before everyone else.

If that hypothesis is correct, then trend-following is a way to follow the smart money without having to make any specific predictions. The disadvantage is that you’ll never be first—you’ll never be the investor who profits the most off of a trend.

There are two key advantages:

  1. You don’t need to figure out whether you can make market-beating predictions, or avoid all the biases involved in assessing your own abilities. You can follow a simple, objectively-testable strategy.
  2. You’re not limited to a single area of expertise. If AI experts anticipate a market move and their trading produces a trend, you follow that trend. If, say, experts on Japanese geopolitics anticipate a move in the yen, you follow that trend, too.

I find trend-following especially appealing in light of my belief that AI advancements will make big changes somewhere, but it’s hard for me to predict where. A trend-following strategy will pick up the trends wherever they arise.

Previously, I wrote about value investing. Value and trend make good complements: value investing works when things go back to normal; when abnormal growth reverts to the mean; when investors make overconfident predictions. Trend-following works when winners keep winning; when investors under-react to changing conditions.

The EA portfolio

Philanthropists care not just about their own money, but about the aggregate portfolio of all value-aligned donors.

Even knowing in retrospect how well AI stocks did over the last 5 years, I mostly feel fine about the fact that I “missed out”, because other EAs invested heavily in AI. If Anthropic IPOs soon and doesn’t crash before then, more than 50% of EA wealth will soon be in AI stocks. Beyond that, many EAs already invest on the thesis that AI will be transformative.

People often pay attention to expected returns but overlook expected risk. Concentrated investments are much riskier than you might think. I’m wary of concentrating too much in bets on the transformative-AI thesis.

Leaning my investments in the right direction

I haven’t made any big changes due to transformative AI, but on decisions where I’m ambivalent, it can push me a bit in one direction. Zvi made this point in On AI and Interest Rates:

Did I bet on low interest rates in 2021 by locking in a 2.5% 30-year (!?!) fixed rate mortgage, thereby making more money off that than I have made from all other income combined since then? Yes. Yes I did.

That mostly wasn’t about AI. The trade was absurd anyway. AI simply made me more excited to pursue it, get it done and size it larger. That is core to my betting and trading strategy (not investment advice!). Look for plays that have multiple reasons to do them (often below the threshold that is worthwhile on its own) and that avoid similar reasons not to do them.

Here are some ways AI gives me another reason to do something I was going to do anyway:

  • I already have a large allocation to trend-following due to its low correlation and right skew, but AI is another reason to hold it.
  • I’m ambivalent on whether I should hold bonds. I already didn’t hold them15, but AI gives another reason not to, because AI advances should cause interest rates to go up (and therefore bond prices to go down).
  • AI makes the value factor look a bit worse and the momentum factor look a bit better. I have 50/50 weighting between value and momentum; basic theory says I should have risk parity weighting, which is more like 60/40.
  • My equity investments are primarily long-only rather than long/short, for various reasons. One concern about long/short is that AI could cause an extreme tail event that blows up the short side. (It’s just as likely to blow up the long side, but the payoff is asymmetric.)
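The risk-parity figure in the list above can be reproduced with a quick sketch. The volatility estimates below are hypothetical, chosen only to illustrate how a 60/40 split falls out of the simplest risk-parity variant (inverse-volatility weighting, which ignores correlations):

```python
def inverse_vol_weights(vols):
    """Weight each asset proportionally to the inverse of its volatility,
    so each contributes roughly equal risk to the portfolio."""
    inv = [1 / v for v in vols]
    total = sum(inv)
    return [x / total for x in inv]

# Hypothetical annualized volatilities: value 10%, momentum 15%
weights = inverse_vol_weights([0.10, 0.15])
print([round(w, 2) for w in weights])  # [0.6, 0.4]
```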

Appendix: Some specific predictions

I’m not knowledgeable enough to make bold predictions about how transformative AI will affect financial markets. But just for fun, I will make some weak predictions based on my best guesses. I didn’t come up with most of these on my own—I heard them from other sources, such as How to (hopefully ethically) make money off of AGI. But a few are original to me.

  • Real interest rates go up (and therefore bonds go down) — TAI increases return on capital and therefore borrowers are willing to pay more. Alternatively, interest rates go up because discount rates go up because extinction looks likely, but that only happens if the market prices in extinction risk, which it might not.
  • Stocks beat their historical average — TAI improves companies’ ability to generate profit. If true, this implies that deep out-of-the-money call options will perform especially well.
  • AI stocks’ earnings growth outpaces the market — companies in the AI supply chain will capture a significant chunk of the productivity gains from AI.
  • AI stocks beat the market — this is a weaker prediction than the previous one, given that AI stocks are already priced with an expectation of strong earnings growth.
  • Real commodity prices go up — TAI makes most goods and services cheaper, but it can’t conjure oil or copper out of nothing.
  • Cheap real estate goes up — as with commodities, TAI can’t make more land.
  • Expensive real estate goes down — people want to live in Manhattan because that’s where the high-paying jobs are. If those jobs are replaced by AI, people won’t want to keep paying $4000/month for a crappy studio apartment.
  • The value factor doesn’t work — see Value Investing in the Age of AGI.
  • The momentum and trend factors work — see above.
  • Regarding net inflation, I have no clue what will happen — “knowledge economy” goods should become cheaper as AI drives down the cost of production; but commodities, real estate, and other fixed goods should become more expensive; and central bank intervention will dampen extremes in either direction.
  • Inflation volatility will go up — the CPI will change dramatically in the short term, but it’s hard to predict which way it will go.

Notes

  1. Moore, D. A., Tanlu, L., & Bazerman, M. H. (2010). Conflict of interest and the intrusion of bias. 

  2. Govindan, P., Chung, E., & Pechenkina, A. (2025). Money Can’t Buy Accuracy: Incentives Fail to Reduce the Confirmation Bias in Data Interpretation. 

  3. Mayraz, G. (2011). Wishful Thinking. 

  4. Non-financial conflicts of interest may matter more; if you’re an ML engineer, you want to be able to keep doing ML on frontier AI systems, and banning AI means you can’t do that anymore. 

  5. Astute readers may recall last week when I quoted a 50% chance, not 40%. That’s because:

    • I don’t take the exact numbers particularly seriously. I don’t have a strong view on whether the risk is 40% or 50%. (I definitely don’t believe it’s 10% or 90%.)
    • The “20% chance AI renders money meaningless” scenario also includes some catastrophically bad outcomes. When you add those in, you might push up the probability from 40% to 50%(ish).

  6. Technically, it also matters in the penultimate scenario, where TAI makes the whole stock market go up. In that case, you’re pretty much guaranteed to make money, but there are ways you could change your investments to make even more money. Money has diminishing utility, so it’s less important to make money when you’re already making a lot of money. 

  7. Some of these scenarios are continuous, not discrete. For example, there’s a continuum between “TAI boosts the whole market” and “gains are concentrated in certain sectors”. Surely some sectors will benefit more than others; it’s just a question of how concentrated the gains are. I’d think of the two scenarios as “I’m not upset that I invested in index funds instead of AI stocks” vs. “I am upset”. 

  8. Metaculus gives lower odds than I do. 

  9. I was also thinking about this five years ago, but I didn’t write about it, and I didn’t change my investing strategy. I regret how long it’s taken me to get around to writing this post. I’m not sure whether I should regret that I didn’t change my investing strategy. 

  10. At least that’s true once TAI is close. Money 10 years pre-TAI might be more valuable than money 20 years pre-TAI because you have more information about how to spend it. 

  11. Mauboussin, M. & Callahan, D. (2026). Bayes and Base Rates: How History Can Guide Our Assessment of the Future. 

  12. Hurst, B., Ooi, Y. H., & Pedersen, L. H. (2017). A Century of Evidence on Trend-Following Investing. 

  13. Babu, A., Hoffman, B., Levine, A., Ooi, Y. H., Schroeder, S., & Stamelos, E. (2019). You Can’t Always Trend When You Want. 

  14. For more on this hypothesis, see:

    Berger, A., Israel, R., & Moskowitz, T. (2009). The Case for Momentum Investing. Section heading “Possible Explanations of Momentum.”

    Goyal, A., Jegadeesh, N., & Subrahmanyam, A. (2024). Empirical determinants of momentum: a perspective using international data. 

  15. At least, I don’t have a static holding; sometimes I hold bonds dynamically as part of a long/short strategy.