A Complete Quantitative Model for Cause Selection
Part of a series on quantitative models for cause selection.
Update: There’s now a web app that can do everything the spreadsheet could and more.
Quantitative models offer a superior approach to determining which interventions to support. However, naive cost-effectiveness estimates have big problems. In particular:
- They don’t give stronger consideration to more robust estimates.
- They don’t always account for all relevant factors.
We can fix the first problem by starting from a prior distribution and updating based on evidence–more robust evidence will cause a bigger probability update. And we can fix the second problem by carefully considering the most important effects of interventions and developing a strong quantitative estimate that incorporates all of them.
So that’s what I did. I developed a quantitative model for comparing interventions and wrote a spreadsheet to implement it.
I see this model as having a few broad uses:
- Making cause prioritization decisions
- Helping people understand where they disagree most strongly
- Making explicit the tradeoff between estimated expected value and robustness
- Figuring out the implications of people’s beliefs about cause prioritization
Although this model isn’t perfect, I believe it makes sense to use quantitative models to make decisions. I will use this model to make decisions about where to donate, so I’ve put a lot of thought into the inputs, but there’s certainly room for improvement. I want to get other people’s suggestions to make sure I build a model that’s as useful as possible.
If you want to use the spreadsheet, please first read the instructions in the section below.
Contents
- Contents
- Using the Spreadsheet
- The Statistics Behind My Quantitative Model
- Preliminary Results
- Discussion
- Notes
Using the Spreadsheet
You can find the most recent version of the model in this Excel spreadsheet. Download it to examine the model in detail or change the inputs.
Some of the functionality of the spreadsheet only works in Excel. If there’s enough demand, I may write a port to Google Sheets.
The remainder of this section describes how to use the spreadsheet. If anything is still unclear, let me know because I’m happy to answer questions–I built this so people can use it, so I want to make sure users have the information they need!
You might want to do something with this spreadsheet that it doesn’t allow you to do. If you make substantial modifications, send me the result so I can see what you came up with. If you don’t want to modify it yourself, you can tell me what you’d like to do and I’ll add it to the spreadsheet, as long as it’s a feasible change.
General Structure
The spreadsheet is organized into sections, with each section on a different sheet (you can switch among them with the tabs at the bottom of the page). When you open the spreadsheet, Excel may prompt you asking if you want to enable macros. You need to enable macros for the calculations to work properly.
Most inputs have the following columns: Name, Low CI, High CI, Mu, Sigma^2, and Notes. Low/High CI give the bounds on your 80% credence interval for that value. Mu and Sigma^2 are calculated automatically, so you can ignore those. Notes contains any potentially relevant information about that value; if you’re confused about a value, check whether there’s anything useful in the Notes column.
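If you’re curious how an 80% credence interval might turn into Mu and Sigma^2, here is a minimal Python sketch. It assumes the interval is symmetric in log space and uses natural logarithms; the spreadsheet may parameterize things differently (for example, in base-10 orders of magnitude), so treat the helper below as an illustration rather than a description of the spreadsheet’s formulas.

```python
from math import log

Z_90 = 1.2816  # 90th-percentile z-score: an 80% CI runs from the 10th to the 90th percentile

def lognormal_params_from_ci(low, high):
    """Fit a log-normal to an 80% credence interval (low, high).

    Hypothetical helper for illustration; assumes the interval is
    symmetric in log space, which may not match the spreadsheet exactly.
    """
    mu = (log(low) + log(high)) / 2              # mean of log(value)
    sigma = (log(high) - log(low)) / (2 * Z_90)  # standard deviation of log(value)
    return mu, sigma ** 2                        # (Mu, Sigma^2)

# Example: an input you believe lies between 5 and 10 with 80% credence
mu, sigma_sq = lognormal_params_from_ci(5, 10)
```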
“Summary” sheet
This sheet shows all the posteriors for different interventions in one place so you can compare them side by side. You shouldn’t need to modify anything on this sheet.
“Globals” sheet
This is the main sheet you’ll want to modify. It contains all the values used by multiple other sheets. It has several sections, described below.
Enable calculations?
This is the most important section if you want to modify any inputs. Calculating posteriors takes a long time (about 30 seconds on my computer), and by default, Excel will try to re-calculate them every time you change the value of a cell. This makes it tedious to change multiple inputs at once. If you want to change several inputs, set “Enable calculations?” to 0, and Excel won’t try to re-calculate anything. Once you’re done making changes, set “Enable calculations?” back to 1.
Prior
This section gives the parameters for the prior distribution over interventions. A prior describes your belief about the effectiveness of an intervention in a category; see my original post and On Priors for more details.
A prior of 1 means that you believe the median intervention in the relevant reference class is as effective as GiveDirectly. There are several priors for different types of interventions. You can set all the priors to be equal if you don’t believe it’s reasonable to differentiate interventions like this; but if you believe some interventions have higher priors than others, you can use this section to change the priors for each category.
Color guide
Explains the meanings of the cell colors.
Other sections
The other sections include model inputs. Each field gives a credence interval where you put your low and high estimates for each of the inputs. Change these however you want.
“EV of Far Future” sheet
Here we compute the expected value (EV) of the far future. This sheet gets all its data from Globals, so you don’t need to modify anything here.
Cause sheets
Each individual cause has its own sheet. The value of each cause is computed differently, so each sheet has a space to provide the inputs into the model. You can modify any of the pink-colored cells to whatever values you believe are correct.
Debugging FAQ
Some cells just say “disabled.”
You need to enable calculations to see results. See “Enable calculations?” above.
I enabled calculations, but a lot of cells still say #VALUE!
Some cells don’t automatically re-calculate. You may need to manually re-calculate these by double-clicking on them and pressing Enter, or by clicking the formula bar and then pressing Enter. I’ve colored all such cells light purple to make them easier to spot. You may also need to re-calculate posteriors (which are colored light blue or orange).
I changed the inputs but the posterior is still the same.
This is the same problem as the previous question; see above.
What does one of the inputs mean?
The inputs have a Notes column. For some of the less clear inputs, I put notes to explain what the input means. If anything is still unclear, let me know and I’ll try to clarify it.
The Statistics Behind My Quantitative Model
This section goes into the details of how I compute the outputs in my quantitative model. I’m keeping this separate since I expect most readers won’t be interested in the details, but I believe it’s important to publish. My knowledge of statistics is fairly weak, and if I describe my process, there’s a much better chance that someone can discover any mistakes I might have made.
This part assumes you have some preliminary knowledge of probability and statistics.
Basic Process
A previous post described the general outline of my process. Here’s a summary of how it works:
- Every intervention produces some utility per unit of resources. Start with a prior distribution over interventions.
- For a particular intervention, we make an estimate of its expected utility and gain a sense of how precise the estimate is.
- We use the prior and the estimate to get a posterior expected value.
- Then we donate to whichever charity has the best posterior expected value.
Our prior follows a log-normal distribution. I wrote a post justifying this choice and deciding what parameters to use (a log-normal distribution has parameters μ and σ, which roughly correspond to the mean and standard deviation of the logarithm). Like the prior, the estimates follow a log-normal distribution; this appears more obviously correct, so I haven’t spent much time trying to justify it.[1]
The formula for posterior expected value is as follows, where U is the true utility and M is our measurement (i.e. the estimated expected value).
\begin{align} \text{E}[U|M=m] = \int_0^\infty u \cdot \text{p}(U=u|M=m)\ du \end{align}
By Bayes’ Theorem, p(U=u|M=m) is given by
\begin{align} \text{p}(U=u|M=m) = \text{p}(U=u) \displaystyle\frac{\text{p}(M=m|U=u)}{\text{p}(M=m)} \end{align}
where p(U=u) follows the prior distribution and p(M=m|U=u) follows a log-normal distribution with parameters μ = u and σ as given by the measurement’s variance. p(M=m) is calculated as
\begin{align} \text{p}(M = m) = \int_0^\infty \text{p}(U = u)\, \text{p}(M = m|U = u)\ du \end{align}
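As a quick sanity check on these formulas (not the spreadsheet’s implementation, which uses the custom integration described in the next section), here is a minimal Python sketch using scipy. The prior parameters, measurement value, and measurement σ below are made-up numbers for illustration.

```python
import numpy as np
from scipy import integrate, stats

# Made-up parameters, expressed with natural logs; the spreadsheet may use a
# different parameterization (e.g. base-10 logs).
prior = stats.lognorm(s=1.0, scale=np.exp(0.0))  # prior over the true utility U
meas_sigma = 1.5                                 # measurement noise in log space
m = 100.0                                        # the observed estimate M = m

def likelihood(u):
    # p(M = m | U = u): a log-normal centered (in log space) on the true utility u
    return stats.lognorm(s=meas_sigma, scale=u).pdf(m)

# p(M = m) = integral of p(U = u) * p(M = m | U = u) du
evidence, _ = integrate.quad(lambda u: prior.pdf(u) * likelihood(u), 0, np.inf)

# E[U | M = m] = integral of u * p(U = u | M = m) du
posterior_ev, _ = integrate.quad(
    lambda u: u * prior.pdf(u) * likelihood(u) / evidence, 0, np.inf
)
print(posterior_ev)  # much smaller than the naive estimate of 100: shrunk toward the prior
```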
Computing integrals in practice
The posterior expected value doesn’t have a closed-form solution as far as I know. To compute it, we have to integrate up to very large values of u (I go up to about 10^75). The traditional way to numerically perform integrals is to divide the area under the curve into equal-sized trapezoids and sum their areas. The problem with this is that if we use, say, 1000 trapezoids, they will each be 10^72 long. Almost all the probability mass lies in the first 10^72, so we lose most of the information that way.
I solved this by writing an integration function that uses geometrically-increasing bucket sizes. Instead of setting bucket i’s bounds as (step * i, step * (i + 1)), I set the bounds as (step^i - offset, step^(i+1) - offset). (We subtract an offset so that the first bucket starts close to 0 instead of at 1.) This provides reasonably accurate results when integrating functions with long tails. I tested it in Python by integrating a Pareto curve using my method and comparing it to the output of the cumulative distribution function; my method was accurate to within 0.1% for a reasonable step parameter (I used 1.25).
Summing Multiple Estimates
In some situations, we want to sum several estimates that follow different distributions. For example, suppose an intervention affects the probability of human extinction. This has an impact on the expected utility from humans living in the far future, wild animals, and numerous other sources. Each of these sources follows a different log-normal distribution with a different μ and σ. But they don’t really represent different measurements, so we can’t treat them as independent. (If we treated them as different measurements, we’d move the posterior closer to the measurements every time we added a new one, which doesn’t make sense because new measurements don’t actually represent new evidence.) So I combine the measurements into a single overall estimate by summing their distributions.
The sum of log-normal distributions does not have a closed-form solution. Summing log-normal distributions is apparently a common problem and I spent a while reading the literature on proposed methods, but unfortunately none of them were adequate. I ended up using a simple and reasonably efficient method.
As described in the previous section, I compute the integral of a distribution by breaking it up into buckets, where each bucket approximates the area under one chunk of the curve, and then taking the sum of the buckets. We can use this to sum multiple distributions: instead of representing a distribution as a function, we can compute the values in all its buckets, and then we can sum two distributions by summing their buckets. Using this method, we can sum m distributions with n buckets each in O(mn^2) time, which in practice is about 5 seconds per distribution on my machine. I believe this is more accurate than approximation methods like Metropolis-Hastings, although I haven’t spent much time testing alternative methods (and I haven’t tested Metropolis-Hastings at all).
Then, depending on context, I either compute the expected value using the resulting sum directly, or I find the mean and variance of the sum and approximate it using a log-normal distribution with the same mean and variance.
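Below is one way the bucket representation might be used to compute the distribution of a sum of two random variables, with the O(n^2) cost per pair that the O(mn^2) figure above implies. This is my reconstruction of the idea, not the spreadsheet’s code; the function and the re-binning grid are hypothetical.

```python
import numpy as np

def sum_bucketed(xs, px, ys, py, grid):
    """Distribution of X + Y, where X is stored as bucket values xs with
    probability masses px, and Y as values ys with masses py. The result is
    re-binned onto `grid` (e.g. the same geometric grid used for integration).
    Hypothetical reconstruction, not the spreadsheet's code.
    """
    out = np.zeros(len(grid))
    for x, p in zip(xs, px):
        for y, q in zip(ys, py):
            # assign the joint mass p*q of the outcome x + y to the nearest grid bucket
            idx = min(np.searchsorted(grid, x + y), len(grid) - 1)
            out[idx] += p * q
    return out
```

To sum m distributions, fold them in one at a time; each fold costs O(n^2), giving the O(mn^2) total mentioned above.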
Summing positive and negative estimates
Some outcomes of an intervention may have negative utility, and we need to add them to other outcomes that have positive utility. This poses a problem: my integration function only covers all-positive or all-negative values, not a mix of both.
I will assume that the sum of two log-normal variables is log-normal. This assumption isn’t strictly true, but I’ve seen it used in the literature[2][3][4], so I’ll assume that it is sufficiently accurate.
Using this assumption, we can find the parameters for the distribution of the sum by integrating across both measurements.
I don’t actually compute the integral across both measurements. Instead, I exploit facts about mean and variance to numerically compute the sum:
\begin{align} \text{Mean}(X - Y) &= \text{E}[X - Y] = \text{E}[X] - \text{E}[Y] \\ \text{Var}(X - Y) &= \text{E}[(X - Y)^2] - (\text{E}[X - Y])^2 \end{align}
(Instead of summing positive X and negative Y, this treats both X and Y as positive and takes their difference.)
The mean is trivial to calculate given that we already know the means of X and Y. We can derive the variance using these equations:
\begin{align} \text{E}[(X - Y)^2] &= \text{E}[X^2] - 2\text{E}[XY] + \text{E}[Y^2] \\ (\text{E}[X - Y])^2 &= (\text{E}[X] - \text{E}[Y])^2 \end{align}
E[XY] is given by Cov(X, Y) + E[X]E[Y]. The interpretation of the math here becomes a little questionable. In one sense, X and Y are strongly correlated because they share a lot of the same inputs (e.g., when we’re measuring the effect of an intervention on the far future, if we colonize more planets, both X and Y will be larger). But we’re summing positive and negative random variables to represent the fact that an intervention could have a positive or a negative effect, so I believe it makes more sense to treat X and Y as though they are independent (i.e. Cov(X, Y) = 0).
Measurements with different priors
You may want to look at two different effects M1 and M2 of some intervention, where these two effects have different priors. For example, if you’re looking at an intervention to alleviate factory farming, you may have different priors on M1, the amount of animal suffering prevented in the short term, and M2, the long-term effects of causing humans to care more about other species. It doesn’t make any sense to combine M1 and M2 directly if they have different priors.
I resolve this by calculating the posteriors for each of the two measurements and then summing the posteriors to produce the final result. I don’t know if there’s a good philosophical justification for this, but it seems reasonable enough.
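A minimal sketch of this procedure, with made-up priors and estimates: compute each posterior expected value under its own prior, then add the results. This reuses the same scipy-based approach as the earlier sanity-check sketch and is not the spreadsheet’s implementation.

```python
import numpy as np
from scipy import integrate, stats

def posterior_ev(prior, meas_sigma, m):
    """Posterior expected value for one measurement, as in the earlier sketch."""
    like = lambda u: stats.lognorm(s=meas_sigma, scale=u).pdf(m)
    evidence, _ = integrate.quad(lambda u: prior.pdf(u) * like(u), 0, np.inf)
    ev, _ = integrate.quad(lambda u: u * prior.pdf(u) * like(u) / evidence, 0, np.inf)
    return ev

# Hypothetical example: a short-term effect M1 with a modest prior, and a
# far-future effect M2 with a higher-median, wider prior.
prior_short_term = stats.lognorm(s=1.0, scale=1.0)
prior_far_future = stats.lognorm(s=1.5, scale=10.0)
total = (posterior_ev(prior_short_term, meas_sigma=0.5, m=5.0)
         + posterior_ev(prior_far_future, meas_sigma=2.0, m=1000.0))
```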
Preliminary Results
Let’s look at some of the outputs of my model, particularly the results I find most surprising. I’ll also mix in some sensitivity analysis to see how we could plausibly change these results by modifying the inputs.
The far future is overwhelmingly important, even for global poverty
Some people argue that the far future is overwhelmingly important, and therefore we should focus our philanthropic efforts on shaping it. The strongest counter-argument I’ve heard is that we don’t know enough about how to affect the far future, so we should focus on short-term interventions that we know a lot about.
If we put this argument in more formal terms, we’d say that far future outcomes have such high variance that the posterior on any far future intervention is about the same as the prior. Should we buy this claim?
Based on my model, it looks like we shouldn’t. Using inputs I find reasonable, the posterior estimate of the far future effects of GiveDirectly is about 1000 times larger than the estimate of the direct effects. (If you used a broader prior, as some have suggested, this ratio could be a lot bigger.) To make the far future effects of GiveDirectly look just barely smaller than the direct effects, you’d have to reduce the prior μ by a factor of 100[5] (and even then, the far future effects of AI safety would still look bigger than GiveDirectly’s direct effects–cash transfers have much smaller effects on existential risk because they’re not particularly optimized for increasing or decreasing it).
In the past, I talked about global poverty charities as though their direct effects were all that mattered and mostly ignored the far future effects. But it appears that even a highly uncertain estimate of the far future effects dominates the direct effects, so I should change my behavior.
Values spreading may be better than existential risk reduction
By my previous best guess, and according to an early version of the model, AI safety work looked better than animal advocacy for positively influencing the far future. So I was surprised to see that my improved model gave the opposite result.
The first version of my model had no way to combine positive and negative effects, so I only looked at the positive outcomes in the far future and used that to estimate how valuable it was to prevent AI-related extinction. And I didn’t include all the benefits of persuading people to care more about animals.[6] After correcting for both of these, my updated model finds that animal advocacy is more important than AI safety research.[7]
The naive expected value estimate for AI safety research is still two orders of magnitude higher than the estimate for animal advocacy. But the AI safety estimate has substantially higher variance, which makes its posterior look worse. If you increase the variance on the prior, AI safety looks better with the same EV estimates. (I had to raise the σ on the prior to 3 orders of magnitude before AI safety looked better than animal advocacy, which I believe is way too big a σ.)
I was quite surprised by this result; it seems intuitively wrong that an activity meant for producing good proximate effects (i.e., preventing animals from suffering on factory farms) could be better than existential risk reduction. But I have carefully chosen the inputs to the model and believe they are reasonable. It would be silly to create this model and then not change my mind based on the evidence it presents.
Changing the prior by increasing/decreasing μ or σ does not change this result. The only plausible way to make AI safety research look better than values spreading is to substantially reduce the variance on the estimate for how much work is necessary to make progress on AI safety or to substantially decrease the estimate of the value of non-human minds.
A brief note on different types of animal advocacy:
Although corporate outreach looks particularly high impact in the short term, the long-term effects of direct advocacy are probably greater. Given how strongly far future effects dominate considerations, this makes interventions like leafleting look better than corporate outreach. Lewis Bollard of the Open Philanthropy Project has claimed that he believes corporate campaigns will have significant long-term effects; I’m open to this possibility, but I find it somewhat implausible on priors and haven’t seen good reason to believe that it is true. But either way, the most important question is which intervention most effectively shifts values in the long term.
High-leverage values spreading looks best
Update 2016-06-06: I thought my model found that AI-targeted values spreading looked best, but my calculations had a simple arithmetic error that caused the naive estimate to be off by several orders of magnitude. After adjusting for this, AI-targeted values spreading looks a bit worse than AI safety research given my original inputs. Some of the claims in this section are probably false.
I’ve heard people criticize the idea of using anti-factory farming advocacy as a way to spread good values: if you originally decided to do animal advocacy to help animals in the short term, isn’t it pretty unlikely that this is coincidentally also the best thing to do to improve the far future?
I believe there are good reasons to believe that organizations like The Humane League (THL) have unusually good positive effects on the far future–in my model’s estimate of the value of organizations like THL, they come out looking better than AI safety work. But I agree that this sort of thing is pretty unlikely to be the best intervention.
In “On Values Spreading”, I wrote about the idea of high-leverage values spreading:
Probably, some sorts of values spreading matter much, much more than others. Perhaps convincing AI researchers to care more about non-human animals could substantially increase the probability that a superintelligent AI also will care about animals. This could be highly impactful and may even be the most effective thing to do right now. (I am much more willing to consider donating to MIRI now that its executive director values non-human animals.) I don’t see anyone doing this, and if I did, I would want to see evidence that they were doing a good job; but this plausibly would be a highly effective intervention.
I used my model to estimate the posterior expected value of spreading good values to AI researchers, and it looks substantially better than anything else–by a factor of 10 or more (depending on what prior you use). By contrast, the posteriors for traditional animal advocacy and AI safety differ by only a factor of 2. This estimate has more outside view uncertainty and I haven’t looked at it in detail, but this preliminary result suggests that it’s worth spending more time looking into high-leverage values spreading.
When I talk about high-leverage values spreading, that doesn’t just mean trying to shift the values of AI researchers. I’m sure there are many possibilities I haven’t considered. That’s just the first thing I tested against my model.
A few organizations currently do things that might qualify as high-leverage values spreading. These include The Foundational Research Institute, Animal Ethics, and The Nonhuman Rights Project. I don’t know enough about these organizations, but it’s plausible that my next major donation will go to one of them.
I don’t want to claim anything too ambitious here. High-leverage values spreading is the best intervention among the ones I’ve checked using this model, but there may be other interventions that are better. (Perhaps something like convincing AI researchers to care more about safety would be more cost-effective.)
Sensitivity analysis
How do the results of this model change when you modify the inputs?
We can change the model in astronomically many ways, and I couldn’t possibly test them all, so I’ll just try some of the changes I expect a lot of people might want to make. If you really want to know what results the model would get according to your values and beliefs, you can fill out the model yourself.
As of this writing, when I put in my best guesses for all the inputs, I get that animal advocacy is about 20,000 times better than the direct effects of GiveDirectly (i.e., not including flow-through effects), AI safety research is 10,000 times better, and AI-targeted values spreading is 100,000 times better. These numbers don’t tell us how good these things are relative to GiveDirectly’s flow-through effects; I use GiveDirectly’s direct effects as a baseline because they are highly certain while the flow-through effects are not.
I put the prior μ on animal advocacy at 10 units (recall, the units are set such that the short-term benefits of GiveDirectly equal 1 unit) and the prior μ for far future effects at 100 units. If we reduce these so the prior μ for everything is 1 unit, then the posterior values for animal advocacy, AI safety, and AI-targeted values spreading are (240, 119, 1376), respectively. For the rest of the sensitivity analysis, I set the priors for each intervention type equal to each other for simplicity’s sake.
I’m going to use a log-normal prior for this sensitivity analysis. Changing the prior opens a huge can of worms which I won’t get into. I will note, however, that the second-most plausible prior distribution is a Pareto, which makes high-expected-value, low-robustness interventions look much better than a log-normal distribution does.
Global poverty vs. animal advocacy
How can we change the priors to make AMF look better than animal advocacy? Let’s reduce μ on the prior to 0.01 units (i.e., the median intervention in our reference class[8] is 1/100 as good as GiveDirectly) and keep σ at 0.5. Under a log-normal prior distribution, that means we believe that about 0.003% of interventions in our reference class are better than GiveDirectly[9], and one in a billion interventions is better than AMF. (I don’t believe anyone could reasonably set a prior this low, but let’s go with it.) Now the posterior expected values for the direct effects of GiveDirectly, AMF, The Humane League (using ACE’s cost-effectiveness estimates[10]), and cage-free campaigns are (0.58, 2.6, 1.3, 17), respectively. (This looks at direct effects only, ignoring far future effects.) So THL looks better than GiveDirectly but worse than AMF, and cage-free campaigns still look best.
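These percentages can be reproduced with a short calculation, under two assumptions of mine: that the prior σ is measured in orders of magnitude (base-10 logs), and that AMF is roughly 10 times as cost-effective as GiveDirectly. Both are assumptions rather than statements from the model, though they appear consistent with the numbers in this paragraph.

```python
from math import log10
from scipy.stats import norm

median, sigma = 0.01, 0.5  # prior median of 0.01 units; sigma in orders of magnitude (assumed)

def fraction_better_than(threshold):
    """P(X > threshold) for a log-normal prior (base-10 parameterization) with this median."""
    z = (log10(threshold) - log10(median)) / sigma
    return 1 - norm.cdf(z)

print(fraction_better_than(1))   # ~3e-5, i.e. about 0.003% better than GiveDirectly
print(fraction_better_than(10))  # ~1e-9, assuming AMF is about 10x GiveDirectly
```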
Let’s choose a more reasonable prior of 0.1. In this case, 2% of interventions in our reference class are better than GiveDirectly and 0.003% are better than AMF. I still believe this is too low (depending on what we mean by our reference class[8]), but I could see someone defending this. Under this prior, the posteriors for GiveDirectly, AMF, THL, and cage-free campaigns are (0.80, 4.4, 6.0, 62).
Perhaps you believe the average intervention in our reference class should be substantially worse than GiveDirectly, but that we should change σ such that GiveDirectly and AMF’s posterior values do not substantially differ from GiveWell’s cost-effectiveness estimates. We can make this happen by updating σ from 0.5 to 0.75. Now our numbers for GiveDirectly/AMF/THL/cage-free are (0.94, 7.7, 66, 1107). This makes the animal interventions look even better than they did with my original prior. (I haven’t talked about far future effects yet because that complicates things, but I’ll note that increasing σ to 0.75 makes the far future effects look about 100 times more important than they did with a σ of 0.5.) There’s a strong case for using this lower μ and higher σ, but the original choice of μ=1 and σ=0.5 is more conservative, so I’ll stick with that for the rest of the sensitivity analysis.
A lot of people believe that animal suffering matters much less than human suffering. Let’s try incorporating this into the model. How much do you have to discount non-human animals before cage-free campaigns look worse than AMF?
My original estimate valued a day of caged-hen suffering at 1/3 as much as a day of human suffering (I briefly justify this in a separate post). If hens only suffer 1/100 as much as humans (which is about the lowest number you could plausibly defend[11]), then cage-free campaigns still have a posterior value of 50, making them about seven times more effective than AMF in expectation. You have to de-value hen suffering all the way down to 1/10,000 that of a human before cage-free campaigns look slightly worse than AMF (6.5 versus 7.5). And even if you de-value chickens at 1/100,000, cage-free campaigns still look better than GiveDirectly.
Suppose you combine a highly skeptical prior with a strong discount on non-human animals. Let’s use μ=0.1 and see how much we need to de-value chicken suffering to make AMF look best. At this lower prior μ, AMF’s posterior expected value is 4.4 units, so we need to make cage-free campaigns look worse than that.
If we say that chickens only suffer 1/100 as much as humans, cage-free campaigns have a posterior of 14, so this isn’t enough. We need to reduce the value of chickens to about 1/2000 before AMF looks better than cage-free campaigns with a prior μ of 0.1.
Is there any other way we can make AMF look more cost-effective than cage-free campaigns?
Let’s begin by choosing μ=0.1 and discounting hens by a factor of 100 relative to humans, which are the most conservative values that look at least semi-justifiable.
AMF’s posterior is already pretty close to its estimate, so we can’t make it look much better by assuming that its estimate is more robust. But let’s see what we can do by modifying the estimate for cage-free campaigns.
I discuss my cost-effectiveness estimate for cage-free campaigns here. In short, the estimate for the value of cage-free campaigns is determined by: (a) total expenditures on campaigns, (b) years until farms would have gone cage-free otherwise, (c) millions of cages prevented by campaigns, (d) proportion of change attributable to campaigns. At the time of this writing, my spreadsheet uses 80% credence intervals ((2, 3); (5, 10); (100, 150); (0.7, 1)) respectively.
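To make the structure concrete, here is a rough sketch of how inputs like these could combine into a cost-per-outcome figure. The combining formula below (cage-years averted per dollar) is my own guess at the structure, not the spreadsheet’s actual formula, and the point values are just midpoints of the intervals above.

```python
# Midpoints of the 80% credence intervals listed above; the combining formula
# is an assumption for illustration, not the model's formula.
expenditures_millions = 2.5       # (a) total spent on campaigns (assumed to be $ millions)
years_counterfactual = 7.5        # (b) years until farms would have gone cage-free anyway
cages_prevented_millions = 125.0  # (c) millions of cages prevented by campaigns
attribution = 0.85                # (d) proportion of the change attributable to campaigns

cage_years_per_dollar = (cages_prevented_millions * years_counterfactual * attribution
                         / expenditures_millions)
print(cage_years_per_dollar)  # ~320 cage-years averted per dollar under these guesses
```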
It’s probably not reasonable to change the estimates for (a) total expenditures and (d) proportion of change attributable to cage-free campaigns, because these appear pretty robust. If we change the interval of (b) years until farms would have gone cage-free otherwise from (5, 10) to (1, 5), cage-free campaigns now have a posterior of 3.7–a bit worse than AMF’s posterior of 4.4. Lewis Bollard of the Open Philanthropy Project has claimed that 5 years is a conservative estimate, so it’s probably not reasonable to put our lower bound much lower than 5 years. But let’s say for the sake of argument that Lewis is somewhat overconfident, and put the confidence interval at (3, 10).
We’re not modifying (a) and (d), so the only input left is (c) number of cages prevented by campaigns (in millions). My original estimate of (100, 150) may be too narrow: perhaps farms unexpectedly fail to follow through on their commitments or something like that. If we widen the interval to (50, 150) and also widen our estimate of (b) to (3, 10), we get a posterior of 3.6, which makes AMF look better.
Recall that here we used the conservative μ and heavily discounted chickens relative to humans. If we didn’t do those things, it would be virtually impossible to change the inputs to our estimate in such a way that AMF looked better than cage-free campaigns.
In conclusion, if you non-trivially value chickens’ suffering, it’s pretty implausible that AMF has better direct effects than cage-free campaigns.
Animal advocacy vs. AI safety
The previous section ignored far-future effects and only examined direct effects. But when we’re talking about AI safety, we have to look at far future effects, so I’ll re-incorporate them here.
According to my best estimates, the posteriors for animal advocacy and AI safety only differ by a factor of 2. It’s not that hard to change this result. Here are some things you could do:
- Increase the uncertainty about how much animal advocacy causes good values to propagate into the far future. You’d have to greatly increase the uncertainty, but I believe this is defensible. My estimates already assume a lot of uncertainty but I might still be too optimistic.
- Assume that animal advocacy does not increase the probability of a hedonium shockwave. This makes AI safety research look about three times as good as THL.
- Assume that hedonium shockwave is not a good outcome. This makes AI safety research look about twice as good as THL.
Here are some things you might think make AI safety look better than animal advocacy, but don’t:
- Discount chicken suffering by a factor of 100 relative to human suffering, and discount other animals/insects proportionally more.
- Moderately decrease the uncertainty about how much work is necessary to prevent unfriendly AI. (My current estimates put the σ at 1.7; you need to reduce σ to about 1 before AI safety work looks better, which is a pretty big reduction.)
Global poverty vs. AI safety
There may be a reasonable way to change the inputs to make global poverty look better than AI safety, but if it exists, it’s probably fairly complicated. The best way to figure this out is for you to put your estimates into the model and see whether global poverty gets a better result than AI safety. This probably makes more sense than me trying to guess what inputs other people would want, because the calculations for far-future effects are substantially more complicated than the global poverty versus animal welfare comparisons above.
Discussion
What about systemic change?
One could argue that a quantitative model can’t properly measure the effects of something highly uncertain like systemic change. Can it?
In fact, the precise reason why my model puts such high value on animal advocacy is that it (ostensibly) begets systemic change. Long-term shifts in how people relate to animals increase the chance that a far future civilization will look friendly to all sentient beings and not just to humans. This is a hugely important outcome. I had no difficulty developing a function to estimate the value of this sort of systemic change. Some of the inputs have wide error bounds to represent my high uncertainty, but this is an inherent fact about the uncertainty of systemic change, not about the model itself. Any attempt to prioritize interventions involving systemic change must make some effort to deal with this high uncertainty.
People often claim that high-risk interventions can’t be quantified. I don’t believe this is true. Most cost-effectiveness models don’t give any weight to an estimate’s variance even though they should, which renders them nearly useless for dealing with high-risk interventions; but using a Bayesian model solves this problem.
Ways the model could be more precise
When building the model, I made some compromises for the sake of simplicity. A simpler model is easier to create, easier to reason about, and less likely to conceal mistakes. But there are some ways that I could perhaps improve the model, and may make some of these changes in the future. I list some potential planned changes on my to-do page.
What results do you get?
I want to see what happens when other people fill out the inputs to this model and what causes people to get different results. I encourage you to input your own best guesses. Then post the result in a comment here or send it to me.
Next Steps
Before I make my next large donation, I want to use my quantitative model to broadly prioritize causes and then find organizations in the best cause area(s) with promising surface signals. Then I will follow these steps for each organization I consider:
- Learn the basics about the organization: its activities, competence, future plans, and room for funding.
- Write a list of all the qualitative factors I know of that are relevant to whether I should donate to the organization.
- Build a customized quantitative model for that organization to estimate its impact. Make the inputs as robust as possible. Figure out what I can learn to reduce ambiguity and then learn it.
- Go through my list of qualitative factors and make sure the model accounts for all of them.
Whenever possible, I will publish the results of the model and be as transparent as I can about my process. I want this model to be useful for other people; if there’s anything I can do to make it more useful, I will do my best to make it happen.
I built this model to improve my own decision-making, but I’m interested in how valuable it is to other people because that affects how much time I should put into work like this. If you’ve changed your donations or altruism-related plans as a result of my work, please let me know.
Thanks to Jacy Anthis, Tom Ash, Gabe Bankman-Fried, Linda Neavel Dickens, Kieran Greig, Peter Hurford, Jake McKinnon, Mindy McTeigue, Kelsey Piper, Buck Shlegeris, Claire Zabel, and a few folks on the Effective Altruism Editing and Review Facebook group for providing feedback on this project.
Notes
1. A brief justification: we can show that estimates should look like log-normal distributions from two facts. (1) The probability of the estimate being incorrect by a factor of X diminishes as X grows; (2) the estimate is probably off by a multiplicative factor, not an additive factor (that is, for estimate X, getting 2X or X/2 would have been equally likely). This doesn’t strictly mandate that we use a log-normal estimate distribution, but log-normal does work well. You could also justify this by saying that an estimate is the product of multiple inputs, and by the Central Limit Theorem, the product of many independent positive factors is approximately log-normal.
2. Mehta, Neelesh B., et al. “Approximating a sum of random variables with a lognormal.” IEEE Transactions on Wireless Communications 6.7 (2007): 2690–2699.
3. Fenton, Lawrence F. “The sum of log-normal probability distributions in scatter transmission systems.” IRE Transactions on Communications Systems 8.1 (1960): 57–67.
4. Rook, Christopher J., and Mitchell Kerman. “Approximating the Sum of Correlated Lognormals: An Implementation.” Available at SSRN 2653337 (2015).
5. It may be better to represent the prior for far-future effects as a two-sided log-normal distribution centered at zero (i.e., a regular log-normal distribution combined with the same distribution flipped over the Y axis). This would require writing a lot of new code, but I’d like to implement it for some future revision of the model. I don’t expect this would affect the results as much as shifting the mean toward zero does, because shifting the mean makes extreme results exponentially less likely, whereas switching to a two-sided log-normal distribution would not change much: it would reduce the prior probability density at each point by a factor of 2, instead of the much larger reduction caused by changing the μ parameter.
6. In particular, I neglected the possibility that animal advocacy could marginally increase the probability of extremely good far future outcomes by, for example, making people more accepting of the idea that different kinds of minds matter. I don’t consider this particularly likely, but extremely good future outcomes matter so much that even a fairly small probability makes a big difference.
7. I find it somewhat amusing that I got this result considering how often I hear people claim that animal advocacy doesn’t have flow-through effects.
8. I’m being intentionally vague here because it’s non-obvious what our reference class should be. I discuss this briefly in On Priors.
9. \[0.0003 \simeq 1 - \text{LognormCDF}(1, \sigma=0.5, \mu=0.01)\]
10. When I use ACE’s cost-effectiveness credence intervals, I expand them to account for possible overconfidence.
11. I expect that a lot of readers discount chickens by more than 100x versus humans or ignore chickens entirely. I couldn’t find any good explanation of why this is unreasonable, but Peter Singer explains why animals ought to have moral value, and this essay by Brian Tomasik discusses some relevant evidence.