How Much Does It Cost to Offset an LLM Subscription?
Is moral offsetting a good idea? Is it ethical to spend money on something harmful, and then donate to a charity that works to counteract those harms?
I’m not going to answer that question. Instead I’m going to ask a different question: if you use an LLM, how much do you have to donate to AI safety to offset the harm of using an LLM?
I can’t give a definitive answer, of course. But I can make an educated guess, and my educated guess is that for every $1 spent on an LLM subscription, you need to donate $0.87 to AI safety charities.
First things first: Why do I believe it’s harmful to buy an LLM subscription?
Paying money to a frontier AI company increases their revenue, and they spend some of that revenue on building more powerful AI systems. Eventually, they build a superintelligent AI. That AI has a good chance of being misaligned and then killing everyone in the world. When you buy an LLM subscription, you cause that to happen slightly faster.
But you can also donate to nonprofits that are working to prevent AI from killing everyone. How much do you need to donate to a nonprofit to offset the harm of a $20/month LLM subscription?
I built a simple Squiggle model to answer that question.
The model
Four key facts:
- When you give a company an additional dollar of revenue, that raises its future valuation by some number.
- A higher valuation lets the company raise more capital and thus spend some additional amount of money.
- AI companies will spend a total of some amount in 2026.
- Meanwhile, it would take some amount of money directed to AI safety nonprofits to cancel out the harm of AI companies’ spending.
From those, you can estimate how much you need to donate using the following procedure:
- Start from the dollar value of your subscription.
- Calculate how much that will increase company valuation.
- Translate increased valuation into increased expenditures.
- Divide by expected total expenditures of frontier AI companies.
- Multiply by expected total cost to offset AI company harm.
The resulting number is the amount to donate to AI safety nonprofits.
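To make that concrete, here's the procedure written out as a tiny Python function. This is just an illustration of the arithmetic, not the Squiggle model itself; the numbers plugged in at the bottom are placeholder point estimates, whereas the real model feeds in probability distributions for each input (described in the next section).

```python
def offset_donation(
    subscription_dollars: float,  # dollars you pay the AI company
    rev_to_valuation: float,      # added valuation per $1 of revenue
    valuation_to_spend: float,    # fraction of added valuation that gets spent
    total_ai_spend: float,        # total frontier AI spending in 2026
    total_offset_cost: float,     # total cost to offset AI company harm
) -> float:
    """How much to donate to AI safety to offset an LLM subscription."""
    # Your dollars -> extra valuation -> extra spending by the company.
    extra_spend = subscription_dollars * rev_to_valuation * valuation_to_spend
    # Your share of total AI spending, times the total cost of offsetting it.
    return extra_spend / total_ai_spend * total_offset_cost

# Placeholder point estimates: $240/year subscription, 7x revenue-to-valuation,
# 10% of valuation spent, $66B total 2026 spending, $3B total offset cost.
print(offset_donation(240, 7, 0.10, 66e9, 3e9))  # ~$7.6/year
```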
There are some difficult questions that this model avoids having to answer:
- We don’t care what proportion of AI company spending goes to R&D on frontier models; we only care about total spending.
- We don’t care to what extent x-risk is increased or decreased per dollar spent.
The inputs
The model has four inputs: (1) the revenue-to-valuation ratio; (2) the valuation-to-expenditures ratio; (3) frontier AI company expenditures; (4) total cost to offset AI company harm. In this section, I will explain how I estimated the values of those inputs.
Revenue-to-valuation ratio
How much does a dollar of revenue raise an AI company’s valuation? I can see arguments for both “hardly at all” and “a lot”.
In favor of “hardly at all”: VCs give AI companies funding on the expectation that their products will be incredibly useful in the future, which doesn’t have much to do with current revenue.
In favor of “a lot”: AI companies raise funding at high revenue multiples, e.g. Anthropic raised its last round (as of September 2025) at 36x revenue (source). This could mean that VCs expect $1 of revenue today to convert to $36 in future value, i.e. revenue has a 36:1 multiplier effect.
A typical 2025 startup valuation is 7x revenue (source). As a median estimate, we could say that $1 of AI company revenue converts to $7 of valuation, with the remaining ~5x of Anthropic's 36x multiple driven by high expectations for future AI products rather than by current revenue.
(I briefly looked into how startup funding scales with revenue and I didn’t find any useful evidence.)
Growth rate matters more for valuation than revenue does, but I don’t think this changes the calculation in the short term because an extra $1 of 2025 revenue also represents an extra $1 in growth relative to 2024 revenue.
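If you want to turn that reasoning into a model input, one natural encoding (roughly what Squiggle's `low to high` notation does for positive bounds) is a lognormal with its 5th and 95th percentiles at the two extremes. Here's an illustrative Python version; the 1x–36x bounds are just one way to capture "hardly at all" through "a lot":

```python
import numpy as np

rng = np.random.default_rng(0)

# Lognormal with 5th/95th percentiles at 1x ("hardly at all") and 36x
# (Anthropic's revenue multiple). Illustrative bounds, not gospel.
low, high = 1.0, 36.0
mu = (np.log(low) + np.log(high)) / 2               # median ~6x
sigma = (np.log(high) - np.log(low)) / (2 * 1.645)  # 1.645 = z-score of 95%
rev_to_valuation = rng.lognormal(mu, sigma, size=100_000)
print(np.median(rev_to_valuation))  # ~6, close to the 7x point estimate above
```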
Valuation-to-expenditures ratio
How much of an additional $1 of company valuation translates into increased expenditures?
Private companies don’t usually publish that information. But based on historical data for AI companies and general trends for startups, it’s reasonable to expect companies to raise capital equal to 5% to 20% of the valuation.
(I’m thinking of AI companies as startups; “startup” connotes “small”, which they clearly aren’t, but I’m using the term in the Paul Graham startup = growth sense. Frontier AI companies are startups because they’re growing fast.)
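In code, this input is just a fraction; an illustrative way to encode "5% to 20%" and see what it implies for a small valuation bump:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fraction of an extra dollar of valuation that turns into extra spending.
# A flat 5-20% range is a simple stand-in for the estimate above.
valuation_to_spend = rng.uniform(0.05, 0.20, size=100_000)

# If a subscription added, say, $7 of valuation, the implied extra spending:
extra_spend = 7 * valuation_to_spend
print(np.mean(extra_spend))  # ~$0.9 of additional AI company spending
```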
Frontier AI company expenditures
Public data on AI company fundraising in 2025:
- Anthropic: $13B
- OpenAI: $40B
- xAI: $10B maybe? (the publicly available data only shows total funding, not individual rounds)
Assume these three companies account for half of AI spending and that the funding they raised will last 18 months; that means AI companies will spend $66B in 2026.
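As a sanity check on that figure, here's the same arithmetic with the uncertainty left in. The specific ranges (xAI's raise, "about half" as 40–70% of the market, an 18-month runway) are rough stand-ins, and they land in the same tens-of-billions ballpark as the $66B figure:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# 2025 fundraising in $ billions (xAI's number is especially uncertain).
raised_2025 = 13 + 40 + rng.uniform(5, 15, n)  # Anthropic + OpenAI + xAI
runway_years = 1.5                             # funding lasts ~18 months
share_of_market = rng.uniform(0.4, 0.7, n)     # "about half" of AI spending

# Annualized spending by these three, scaled up to all frontier AI companies.
total_spend_2026 = raised_2025 / runway_years / share_of_market
print(np.mean(total_spend_2026))  # roughly $80B/year with these stand-in inputs
```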
Total cost to offset AI company harm
This is the hardest number to estimate. My assumption is that the AI safety community currently spends on the order of $30 million to $100 million per year, and if we spent on the order of 10–100x more, then that would be enough to fully offset the harms of AI companies.
I suspect that spending 100x more on pure alignment research would not be enough. But spending 100x more would likely be enough if some of the spending goes to governance/policy/advocacy, and some goes to things that have multiplier effects (e.g. you could spend $1 to cause AI companies to contribute $10 more to safety research). I’m also assuming you can make AI safe merely by throwing money at the problem, which is clearly false, but it makes sense to assume it’s true for the purposes of this model.
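One way to turn that guess into a number the model can use: multiply a distribution over current AI safety spending by a distribution over the required multiplier. The lognormal shapes below are an illustrative choice; the ranges are the ones above:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def lognormal_90ci(low, high, size):
    """Lognormal with its 5th/95th percentiles at low/high (roughly what
    Squiggle's `low to high` notation produces for positive bounds)."""
    mu = (np.log(low) + np.log(high)) / 2
    sigma = (np.log(high) - np.log(low)) / (2 * 1.645)
    return rng.lognormal(mu, sigma, size)

current_safety_spend = lognormal_90ci(30e6, 100e6, n)  # $/year spent today
required_multiplier = lognormal_90ci(10, 100, n)       # need 10-100x more
total_offset_cost = current_safety_spend * required_multiplier
print(np.median(total_offset_cost))  # on the order of $1-2 billion per year
```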
The answer (according to my model)
Put all those numbers together and the model spits out a mean cost of $0.87 in donations for every $1 spent on LLM subscriptions. That means for a $20/month subscription, according to the model you’d need to donate $17/month to AI safety orgs.
The model’s median estimate is only $0.06—which is to say, an LLM subscription probably only does a little bit of harm. But there is a small probability that you need to donate quite a bit more to offset your LLM usage, so the expected cost is much higher at $0.87.
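For anyone who wants to poke at the numbers, here's a self-contained Python sketch of the whole pipeline. It's not the Squiggle model itself, and the input distributions are the illustrative stand-ins from the previous section, so the outputs won't reproduce the $0.87 mean and $0.06 median exactly; but the structure is the same, and the long right tail still pulls the mean above the median.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

def lognormal_90ci(low, high, size):
    """Lognormal with its 5th/95th percentiles at low/high."""
    mu = (np.log(low) + np.log(high)) / 2
    sigma = (np.log(high) - np.log(low)) / (2 * 1.645)
    return rng.lognormal(mu, sigma, size)

# Inputs (illustrative stand-ins for the Squiggle model's distributions).
rev_to_valuation = lognormal_90ci(1, 36, n)         # $ valuation per $ revenue
valuation_to_spend = rng.uniform(0.05, 0.20, n)     # fraction of valuation spent
raised_2025 = (13 + 40 + rng.uniform(5, 15, n)) * 1e9  # 2025 fundraising, $
total_spend_2026 = raised_2025 / 1.5 / rng.uniform(0.4, 0.7, n)
total_offset_cost = lognormal_90ci(30e6, 100e6, n) * lognormal_90ci(10, 100, n)

# The procedure: donation needed per $1 of subscription revenue.
donation_per_dollar = (
    rev_to_valuation * valuation_to_spend / total_spend_2026 * total_offset_cost
)

print("mean:  ", np.mean(donation_per_dollar))   # won't match the post's $0.87
print("median:", np.median(donation_per_dollar)) # won't match the post's $0.06
```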
Limitations of the model
Like any model, this one does not perfectly match reality. Some examples of problems this model has:
- I have no clue what the total cost is to offset the harm of AI companies.
- The model assumes the money you donate does as much good as the average dollar spent on AI safety. But maybe your dollars can be above average. (Or they could even be below average.)
- Maybe giving more money to Anthropic is good actually, because Anthropic is the least unsafe AI company and speeding them up improves our chances.1
- Is moral offsetting even okay? Maybe we should obey a rule-utilitarian constraint against doing bad things, even if we offset them. Or maybe moral offsetting is silly and we should just donate to whatever charity is most effective.
Conclusion
I have a subscription to Claude. Last year I donated a lot of money to AI safety but I didn’t make any donations specifically for offsetting. Having put more thought into it to write this post, I think I will start donating an extra $240/year—$1 donated for every $1 spent on Claude. My model suggested donating 87 cents per dollar, but the model isn’t that precise, and $1-per-dollar is a nice round number. I’m still undecided on whether the concept of moral offsetting makes sense, but I figure I might as well do it.
Notes
1. In my first draft, I also said it might be net good to give AI companies money if they’ll use some of it on alignment research. But on reflection, I’m pretty sure that’s wrong, because giving them money speeds up AI progress, and there’s no strong reason to expect that increasing AI company revenue will increase total expenditures on alignment.

   I also expect it’s bad to speed up Anthropic, but I’m not confident about that.